00:00:00.001 Started by upstream project "autotest-per-patch" build number 132079 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.086 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.266 > git --version # 'git version 2.39.2' 00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.304 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.001 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.011 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.022 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.022 > git config core.sparsecheckout # timeout=10 00:00:04.035 > git read-tree -mu HEAD # timeout=10 00:00:04.050 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.070 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.070 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.173 [Pipeline] Start of Pipeline 00:00:04.189 [Pipeline] library 00:00:04.190 Loading library shm_lib@master 00:00:04.190 Library shm_lib@master is cached. Copying from home. 00:00:04.207 [Pipeline] node 00:00:04.215 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.217 [Pipeline] { 00:00:04.226 [Pipeline] catchError 00:00:04.228 [Pipeline] { 00:00:04.239 [Pipeline] wrap 00:00:04.248 [Pipeline] { 00:00:04.258 [Pipeline] stage 00:00:04.260 [Pipeline] { (Prologue) 00:00:04.492 [Pipeline] sh 00:00:04.783 + logger -p user.info -t JENKINS-CI 00:00:04.807 [Pipeline] echo 00:00:04.809 Node: CYP9 00:00:04.815 [Pipeline] sh 00:00:05.119 [Pipeline] setCustomBuildProperty 00:00:05.130 [Pipeline] echo 00:00:05.131 Cleanup processes 00:00:05.137 [Pipeline] sh 00:00:05.424 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.424 22467 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.438 [Pipeline] sh 00:00:05.725 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.725 ++ grep -v 'sudo pgrep' 00:00:05.725 ++ awk '{print $1}' 00:00:05.725 + sudo kill -9 00:00:05.725 + true 00:00:05.741 [Pipeline] cleanWs 00:00:05.752 [WS-CLEANUP] Deleting project workspace... 00:00:05.752 [WS-CLEANUP] Deferred wipeout is used... 
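[Editor's note] The "Cleanup processes" step above reduces to one shell idiom: list every process still referencing the previous run's workspace, then kill it. A minimal sketch of that idiom, with the path taken from this job (variable names here are illustrative; the actual step is generated by the Jenkins pipeline):

```bash
#!/usr/bin/env bash
# Kill leftover SPDK processes from a previous run of this job.
ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# pgrep -af prints "PID COMMAND" for every process whose command line
# mentions the workspace; drop the pgrep invocation itself, keep the PIDs.
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')

# On a clean node the PID list is empty and kill fails; "|| true" keeps
# the step green, which is the "+ true" visible in the trace above.
sudo kill -9 $pids || true
```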
00:00:05.759 [WS-CLEANUP] done 00:00:05.762 [Pipeline] setCustomBuildProperty 00:00:05.776 [Pipeline] sh 00:00:06.061 + sudo git config --global --replace-all safe.directory '*' 00:00:06.150 [Pipeline] httpRequest 00:00:06.523 [Pipeline] echo 00:00:06.525 Sorcerer 10.211.164.101 is alive 00:00:06.535 [Pipeline] retry 00:00:06.537 [Pipeline] { 00:00:06.551 [Pipeline] httpRequest 00:00:06.556 HttpMethod: GET 00:00:06.556 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.556 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.582 Response Code: HTTP/1.1 200 OK 00:00:06.582 Success: Status code 200 is in the accepted range: 200,404 00:00:06.583 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:29.939 [Pipeline] } 00:00:29.956 [Pipeline] // retry 00:00:29.963 [Pipeline] sh 00:00:30.249 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:30.265 [Pipeline] httpRequest 00:00:30.796 [Pipeline] echo 00:00:30.798 Sorcerer 10.211.164.101 is alive 00:00:30.807 [Pipeline] retry 00:00:30.809 [Pipeline] { 00:00:30.822 [Pipeline] httpRequest 00:00:30.827 HttpMethod: GET 00:00:30.827 URL: http://10.211.164.101/packages/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz 00:00:30.828 Sending request to url: http://10.211.164.101/packages/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz 00:00:30.835 Response Code: HTTP/1.1 200 OK 00:00:30.835 Success: Status code 200 is in the accepted range: 200,404 00:00:30.836 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz 00:02:17.035 [Pipeline] } 00:02:17.051 [Pipeline] // retry 00:02:17.059 [Pipeline] sh 00:02:17.347 + tar --no-same-owner -xf spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz 00:02:19.904 [Pipeline] sh 00:02:20.192 + git -C spdk log --oneline -n5 00:02:20.192 8053cd6b8 test/iscsi_tgt: Remove support for the namespace arg 00:02:20.192 461b97702 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:02:20.192 4c618f461 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:02:20.192 a51629061 test/nvmf: Remove all transport conditions from the test suites 00:02:20.192 9f70a047a test/nvmf: Drop $RDMA_IP_LIST 00:02:20.204 [Pipeline] } 00:02:20.216 [Pipeline] // stage 00:02:20.224 [Pipeline] stage 00:02:20.227 [Pipeline] { (Prepare) 00:02:20.242 [Pipeline] writeFile 00:02:20.258 [Pipeline] sh 00:02:20.547 + logger -p user.info -t JENKINS-CI 00:02:20.561 [Pipeline] sh 00:02:20.848 + logger -p user.info -t JENKINS-CI 00:02:20.861 [Pipeline] sh 00:02:21.149 + cat autorun-spdk.conf 00:02:21.149 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.149 SPDK_TEST_NVMF=1 00:02:21.149 SPDK_TEST_NVME_CLI=1 00:02:21.149 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:21.149 SPDK_TEST_NVMF_NICS=e810 00:02:21.149 SPDK_TEST_VFIOUSER=1 00:02:21.149 SPDK_RUN_UBSAN=1 00:02:21.149 NET_TYPE=phy 00:02:21.157 RUN_NIGHTLY=0 00:02:21.161 [Pipeline] readFile 00:02:21.185 [Pipeline] withEnv 00:02:21.187 [Pipeline] { 00:02:21.198 [Pipeline] sh 00:02:21.489 + set -ex 00:02:21.489 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:21.489 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:21.489 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.489 ++ SPDK_TEST_NVMF=1 00:02:21.489 ++ SPDK_TEST_NVME_CLI=1 00:02:21.489 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
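[Editor's note] The two httpRequest/retry blocks above fetch pre-packaged snapshots of the jbp repo and of the SPDK commit under test from the internal package cache ("Sorcerer", 10.211.164.101). Outside Jenkins, the same fetch-with-retry-then-unpack sequence could be sketched as follows; curl is a stand-in for the pipeline's httpRequest step:

```bash
# Fetch a pre-built source snapshot from the package cache and unpack it.
MIRROR=http://10.211.164.101/packages
PKG=spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz   # commit under test

# Retry a few times, as the [Pipeline] retry block does.
for attempt in 1 2 3; do
    curl -fsS -o "$PKG" "$MIRROR/$PKG" && break
    echo "attempt $attempt failed, retrying" >&2
    sleep 10
done

# --no-same-owner: extract as the CI user rather than the archive's owner,
# matching the tar invocation in the log.
tar --no-same-owner -xf "$PKG"
```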
00:02:21.489 ++ SPDK_TEST_NVMF_NICS=e810 00:02:21.489 ++ SPDK_TEST_VFIOUSER=1 00:02:21.489 ++ SPDK_RUN_UBSAN=1 00:02:21.489 ++ NET_TYPE=phy 00:02:21.489 ++ RUN_NIGHTLY=0 00:02:21.489 + case $SPDK_TEST_NVMF_NICS in 00:02:21.489 + DRIVERS=ice 00:02:21.489 + [[ tcp == \r\d\m\a ]] 00:02:21.489 + [[ -n ice ]] 00:02:21.489 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:21.489 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:21.489 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:21.489 rmmod: ERROR: Module irdma is not currently loaded 00:02:21.489 rmmod: ERROR: Module i40iw is not currently loaded 00:02:21.489 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:21.489 + true 00:02:21.489 + for D in $DRIVERS 00:02:21.489 + sudo modprobe ice 00:02:21.489 + exit 0 00:02:21.500 [Pipeline] } 00:02:21.514 [Pipeline] // withEnv 00:02:21.520 [Pipeline] } 00:02:21.535 [Pipeline] // stage 00:02:21.546 [Pipeline] catchError 00:02:21.548 [Pipeline] { 00:02:21.561 [Pipeline] timeout 00:02:21.561 Timeout set to expire in 1 hr 0 min 00:02:21.563 [Pipeline] { 00:02:21.578 [Pipeline] stage 00:02:21.579 [Pipeline] { (Tests) 00:02:21.593 [Pipeline] sh 00:02:21.882 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:21.882 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:21.882 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:21.882 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:21.882 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.882 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:21.882 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:21.882 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:21.882 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:21.882 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:21.882 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:21.882 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:21.882 + source /etc/os-release 00:02:21.882 ++ NAME='Fedora Linux' 00:02:21.882 ++ VERSION='39 (Cloud Edition)' 00:02:21.882 ++ ID=fedora 00:02:21.882 ++ VERSION_ID=39 00:02:21.882 ++ VERSION_CODENAME= 00:02:21.882 ++ PLATFORM_ID=platform:f39 00:02:21.882 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:21.882 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:21.882 ++ LOGO=fedora-logo-icon 00:02:21.882 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:21.882 ++ HOME_URL=https://fedoraproject.org/ 00:02:21.882 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:21.882 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:21.882 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:21.882 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:21.882 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:21.882 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:21.882 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:21.882 ++ SUPPORT_END=2024-11-12 00:02:21.882 ++ VARIANT='Cloud Edition' 00:02:21.882 ++ VARIANT_ID=cloud 00:02:21.882 + uname -a 00:02:21.882 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:21.882 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:24.428 Hugepages 00:02:24.428 node hugesize free / total 00:02:24.428 node0 1048576kB 0 / 0 00:02:24.689 node0 2048kB 0 / 0 00:02:24.689 node1 1048576kB 0 / 0 00:02:24.689 node1 2048kB 0 / 0 
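[Editor's note] The traced block above is the NIC preparation logic: autorun-spdk.conf names the NIC family (e810), the script maps it to a kernel driver, unloads RDMA modules that could claim the port, and loads the one it needs. Reconstructed as a standalone sketch; the e810-to-ice mapping is the branch this run takes, other NIC values map to other drivers:

```bash
# NIC prep, as traced above: pick the driver for SPDK_TEST_NVMF_NICS,
# clear out competing RDMA modules, then load the right one.
source ./autorun-spdk.conf

case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;   # Intel E810 -> ice driver (this run)
    *)    DRIVERS=    ;;   # other NICs map to other drivers
esac

# The rmmod is best-effort: on a clean node none of these are loaded,
# which is exactly the run of "not currently loaded" errors above.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

for D in $DRIVERS; do
    sudo modprobe "$D"
done
```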
00:02:24.689 00:02:24.689 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:24.689 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:24.689 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:24.689 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:24.689 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:24.689 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:24.689 + rm -f /tmp/spdk-ld-path 00:02:24.689 + source autorun-spdk.conf 00:02:24.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.689 ++ SPDK_TEST_NVMF=1 00:02:24.689 ++ SPDK_TEST_NVME_CLI=1 00:02:24.689 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.689 ++ SPDK_TEST_NVMF_NICS=e810 00:02:24.689 ++ SPDK_TEST_VFIOUSER=1 00:02:24.689 ++ SPDK_RUN_UBSAN=1 00:02:24.689 ++ NET_TYPE=phy 00:02:24.689 ++ RUN_NIGHTLY=0 00:02:24.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:24.689 + [[ -n '' ]] 00:02:24.689 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.689 + for M in /var/spdk/build-*-manifest.txt 00:02:24.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:24.689 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:24.689 + for M in /var/spdk/build-*-manifest.txt 00:02:24.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:24.689 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:24.689 + for M in /var/spdk/build-*-manifest.txt 00:02:24.689 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:24.689 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:24.689 ++ uname 00:02:24.689 + [[ Linux == \L\i\n\u\x ]] 00:02:24.690 + sudo dmesg -T 00:02:24.951 + sudo dmesg --clear 00:02:24.951 + dmesg_pid=24037 00:02:24.951 + [[ Fedora Linux == FreeBSD ]] 00:02:24.951 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.951 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:24.951 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:24.951 + [[ -x /usr/src/fio-static/fio ]] 00:02:24.951 + export FIO_BIN=/usr/src/fio-static/fio 00:02:24.951 + FIO_BIN=/usr/src/fio-static/fio 00:02:24.951 + sudo dmesg -Tw 00:02:24.951 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:24.951 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:24.951 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:24.951 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.951 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:24.951 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:24.951 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.951 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:24.951 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.951 18:51:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:24.951 18:51:54 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:24.951 18:51:54 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:24.951 18:51:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:24.951 18:51:54 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:24.951 18:51:54 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:24.951 18:51:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:24.951 18:51:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:24.951 18:51:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:24.951 18:51:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:24.951 18:51:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:24.951 18:51:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.951 18:51:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.951 18:51:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.951 18:51:54 -- paths/export.sh@5 -- $ export PATH 00:02:24.951 18:51:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:24.951 18:51:54 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:24.951 18:51:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:24.951 18:51:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730829114.XXXXXX 00:02:24.951 18:51:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730829114.v20LVp 00:02:24.951 18:51:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:24.951 18:51:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:24.951 18:51:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:24.951 18:51:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:24.951 18:51:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:24.951 18:51:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:24.951 18:51:54 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:24.951 18:51:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.951 18:51:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:24.951 18:51:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:24.951 18:51:54 -- pm/common@17 -- $ local monitor 00:02:24.951 18:51:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.951 18:51:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.951 18:51:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.951 18:51:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:24.951 18:51:54 -- pm/common@21 -- $ date +%s 00:02:24.951 18:51:54 -- pm/common@25 -- $ sleep 1 00:02:25.213 18:51:54 -- pm/common@21 -- $ date +%s 00:02:25.213 18:51:54 -- pm/common@21 -- $ date +%s 00:02:25.213 18:51:54 -- pm/common@21 -- $ date +%s 00:02:25.213 18:51:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730829114 00:02:25.213 18:51:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730829114 00:02:25.213 18:51:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730829114 00:02:25.213 18:51:54 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730829114 00:02:25.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730829114_collect-cpu-load.pm.log 00:02:25.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730829114_collect-vmstat.pm.log 00:02:25.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730829114_collect-cpu-temp.pm.log 00:02:25.213 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730829114_collect-bmc-pm.bmc.pm.log 00:02:26.157 18:51:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:26.157 18:51:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:26.157 18:51:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:26.157 18:51:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.157 18:51:55 -- spdk/autobuild.sh@16 -- $ date -u 00:02:26.157 Tue Nov 5 05:51:55 PM UTC 2024 00:02:26.157 18:51:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:26.157 v25.01-pre-166-g8053cd6b8 00:02:26.157 18:51:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:26.157 18:51:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:26.157 18:51:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:26.157 18:51:55 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:26.157 18:51:55 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:26.157 18:51:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.157 ************************************ 00:02:26.157 START TEST ubsan 00:02:26.157 ************************************ 00:02:26.157 18:51:55 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:26.157 using ubsan 00:02:26.157 00:02:26.157 real 0m0.001s 00:02:26.157 user 0m0.001s 00:02:26.157 sys 0m0.000s 00:02:26.157 18:51:55 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:26.157 18:51:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:26.157 ************************************ 00:02:26.157 END TEST ubsan 00:02:26.157 ************************************ 00:02:26.157 18:51:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:26.157 18:51:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:26.157 18:51:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:26.157 18:51:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:26.157 18:51:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:26.157 18:51:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:26.157 18:51:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:26.157 18:51:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:26.157 
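[Editor's note] start_monitor_resources, traced above, launches one collector per resource (CPU load, vmstat, BMC power, CPU temperature), each tagged with a shared monitor.autobuild.sh.<epoch> prefix and writing a .pm.log under output/power so the EXIT trap (stop_monitor_resources) can find and stop them later. A simplified sketch of that launch pattern; backgrounding with & is an assumption here, since the real collectors detach themselves (hence the "Redirecting to ... .pm.log" lines):

```bash
# Launch background resource monitors for the duration of the build.
PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
prefix=monitor.autobuild.sh.$(date +%s)   # shared tag, e.g. ...1730829114

mkdir -p "$OUT"
"$PM/collect-cpu-load" -d "$OUT" -l -p "$prefix" &
"$PM/collect-vmstat"   -d "$OUT" -l -p "$prefix" &
"$PM/collect-cpu-temp" -d "$OUT" -l -p "$prefix" &
sudo -E "$PM/collect-bmc-pm" -d "$OUT" -l -p "$prefix" &  # BMC access needs root

# A matching teardown would kill anything still tagged with $prefix on exit.
```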
18:51:55 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:26.418 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:26.418 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:26.679 Using 'verbs' RDMA provider 00:02:42.539 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:54.783 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:54.783 Creating mk/config.mk...done. 00:02:54.783 Creating mk/cc.flags.mk...done. 00:02:54.783 Type 'make' to build. 00:02:54.783 18:52:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:54.783 18:52:23 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:54.784 18:52:23 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:54.784 18:52:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.784 ************************************ 00:02:54.784 START TEST make 00:02:54.784 ************************************ 00:02:54.784 18:52:24 make -- common/autotest_common.sh@1127 -- $ make -j144 00:02:55.355 make[1]: Nothing to be done for 'all'. 00:02:56.738 The Meson build system 00:02:56.738 Version: 1.5.0 00:02:56.738 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:56.738 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:56.738 Build type: native build 00:02:56.738 Project name: libvfio-user 00:02:56.738 Project version: 0.0.1 00:02:56.738 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.738 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.738 Host machine cpu family: x86_64 00:02:56.738 Host machine cpu: x86_64 00:02:56.738 Run-time dependency threads found: YES 00:02:56.738 Library dl found: YES 00:02:56.738 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.738 Run-time dependency json-c found: YES 0.17 00:02:56.738 Run-time dependency cmocka found: YES 1.1.7 00:02:56.738 Program pytest-3 found: NO 00:02:56.738 Program flake8 found: NO 00:02:56.738 Program misspell-fixer found: NO 00:02:56.738 Program restructuredtext-lint found: NO 00:02:56.738 Program valgrind found: YES (/usr/bin/valgrind) 00:02:56.738 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.738 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.738 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.738 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:56.738 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:56.738 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:56.738 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
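[Editor's note] run_test, used twice above (run_test ubsan echo 'using ubsan' and run_test make make -j144), is SPDK's suite wrapper from autotest_common.sh: it prints the START/END banners seen in the log and times the wrapped command. A simplified stand-in with the same observable shape, not the exact upstream implementation:

```bash
# Simplified run_test: banner, time the command, banner, keep the status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test make make -j144   # as invoked by autobuild.sh above
```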
00:02:56.738 Build targets in project: 8 00:02:56.738 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:56.738 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:56.738 00:02:56.738 libvfio-user 0.0.1 00:02:56.738 00:02:56.738 User defined options 00:02:56.738 buildtype : debug 00:02:56.738 default_library: shared 00:02:56.738 libdir : /usr/local/lib 00:02:56.738 00:02:56.738 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:56.738 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:56.999 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:56.999 [2/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:56.999 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:56.999 [4/37] Compiling C object samples/null.p/null.c.o 00:02:56.999 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:56.999 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:56.999 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:56.999 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:56.999 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:56.999 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:56.999 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:56.999 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:56.999 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:56.999 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:56.999 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:56.999 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:56.999 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:56.999 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:56.999 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:56.999 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:56.999 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:56.999 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:56.999 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:56.999 [24/37] Compiling C object samples/client.p/client.c.o 00:02:56.999 [25/37] Compiling C object samples/server.p/server.c.o 00:02:56.999 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:56.999 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:56.999 [28/37] Linking target samples/client 00:02:56.999 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:56.999 [30/37] Linking target test/unit_tests 00:02:57.261 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:57.261 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:57.261 [33/37] Linking target samples/gpio-pci-idio-16 00:02:57.261 [34/37] Linking target samples/null 00:02:57.261 [35/37] Linking target samples/server 00:02:57.261 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:57.261 [37/37] Linking target samples/lspci 00:02:57.261 INFO: autodetecting backend as ninja 00:02:57.261 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
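[Editor's note] The libvfio-user section above is a standard meson/ninja cycle: configure with buildtype debug and default_library shared (the "User defined options" block), compile the 37 targets, then stage the result into the SPDK tree with the DESTDIR install that opens the next chunk. Reproduced standalone, with paths assumed relative to an SPDK checkout rather than the CI workspace:

```bash
# Configure, build, and stage the bundled libvfio-user.
SRC=spdk/libvfio-user
BUILD=spdk/build/libvfio-user/build-debug

meson setup "$BUILD" "$SRC" --buildtype debug \
    -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$BUILD"
DESTDIR=$PWD/spdk/build/libvfio-user meson install --quiet -C "$BUILD"
```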
00:02:57.261 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:57.833 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.833 ninja: no work to do. 00:03:04.481 The Meson build system 00:03:04.481 Version: 1.5.0 00:03:04.481 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:04.481 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:04.481 Build type: native build 00:03:04.481 Program cat found: YES (/usr/bin/cat) 00:03:04.481 Project name: DPDK 00:03:04.481 Project version: 24.03.0 00:03:04.481 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:04.481 C linker for the host machine: cc ld.bfd 2.40-14 00:03:04.481 Host machine cpu family: x86_64 00:03:04.481 Host machine cpu: x86_64 00:03:04.481 Message: ## Building in Developer Mode ## 00:03:04.481 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:04.481 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:04.481 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:04.481 Program python3 found: YES (/usr/bin/python3) 00:03:04.481 Program cat found: YES (/usr/bin/cat) 00:03:04.481 Compiler for C supports arguments -march=native: YES 00:03:04.481 Checking for size of "void *" : 8 00:03:04.481 Checking for size of "void *" : 8 (cached) 00:03:04.481 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:04.481 Library m found: YES 00:03:04.481 Library numa found: YES 00:03:04.481 Has header "numaif.h" : YES 00:03:04.481 Library fdt found: NO 00:03:04.481 Library execinfo found: NO 00:03:04.481 Has header "execinfo.h" : YES 00:03:04.481 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:04.481 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:04.481 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:04.481 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:04.481 Run-time dependency openssl found: YES 3.1.1 00:03:04.481 Run-time dependency libpcap found: YES 1.10.4 00:03:04.481 Has header "pcap.h" with dependency libpcap: YES 00:03:04.481 Compiler for C supports arguments -Wcast-qual: YES 00:03:04.481 Compiler for C supports arguments -Wdeprecated: YES 00:03:04.481 Compiler for C supports arguments -Wformat: YES 00:03:04.481 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:04.481 Compiler for C supports arguments -Wformat-security: NO 00:03:04.481 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.481 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:04.481 Compiler for C supports arguments -Wnested-externs: YES 00:03:04.481 Compiler for C supports arguments -Wold-style-definition: YES 00:03:04.481 Compiler for C supports arguments -Wpointer-arith: YES 00:03:04.481 Compiler for C supports arguments -Wsign-compare: YES 00:03:04.481 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:04.481 Compiler for C supports arguments -Wundef: YES 00:03:04.481 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.481 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:04.481 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:04.481 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:04.481 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:04.481 Program objdump found: YES (/usr/bin/objdump) 00:03:04.481 Compiler for C supports arguments -mavx512f: YES 00:03:04.481 Checking if "AVX512 checking" compiles: YES 00:03:04.481 Fetching value of define "__SSE4_2__" : 1 00:03:04.481 Fetching value of define "__AES__" : 1 00:03:04.481 Fetching value of define "__AVX__" : 1 00:03:04.481 Fetching value of define "__AVX2__" : 1 00:03:04.481 Fetching value of define "__AVX512BW__" : 1 00:03:04.481 Fetching value of define "__AVX512CD__" : 1 00:03:04.481 Fetching value of define "__AVX512DQ__" : 1 00:03:04.481 Fetching value of define "__AVX512F__" : 1 00:03:04.481 Fetching value of define "__AVX512VL__" : 1 00:03:04.481 Fetching value of define "__PCLMUL__" : 1 00:03:04.481 Fetching value of define "__RDRND__" : 1 00:03:04.481 Fetching value of define "__RDSEED__" : 1 00:03:04.481 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:04.481 Fetching value of define "__znver1__" : (undefined) 00:03:04.481 Fetching value of define "__znver2__" : (undefined) 00:03:04.481 Fetching value of define "__znver3__" : (undefined) 00:03:04.481 Fetching value of define "__znver4__" : (undefined) 00:03:04.481 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:04.481 Message: lib/log: Defining dependency "log" 00:03:04.481 Message: lib/kvargs: Defining dependency "kvargs" 00:03:04.481 Message: lib/telemetry: Defining dependency "telemetry" 00:03:04.481 Checking for function "getentropy" : NO 00:03:04.481 Message: lib/eal: Defining dependency "eal" 00:03:04.481 Message: lib/ring: Defining dependency "ring" 00:03:04.481 Message: lib/rcu: Defining dependency "rcu" 00:03:04.481 Message: lib/mempool: Defining dependency "mempool" 00:03:04.481 Message: lib/mbuf: Defining dependency "mbuf" 00:03:04.481 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:04.481 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:04.481 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:04.481 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:04.481 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:04.481 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:04.481 Compiler for C supports arguments -mpclmul: YES 00:03:04.481 Compiler for C supports arguments -maes: YES 00:03:04.481 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:04.481 Compiler for C supports arguments -mavx512bw: YES 00:03:04.481 Compiler for C supports arguments -mavx512dq: YES 00:03:04.481 Compiler for C supports arguments -mavx512vl: YES 00:03:04.481 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:04.481 Compiler for C supports arguments -mavx2: YES 00:03:04.481 Compiler for C supports arguments -mavx: YES 00:03:04.481 Message: lib/net: Defining dependency "net" 00:03:04.481 Message: lib/meter: Defining dependency "meter" 00:03:04.481 Message: lib/ethdev: Defining dependency "ethdev" 00:03:04.481 Message: lib/pci: Defining dependency "pci" 00:03:04.481 Message: lib/cmdline: Defining dependency "cmdline" 00:03:04.481 Message: lib/hash: Defining dependency "hash" 00:03:04.481 Message: lib/timer: Defining dependency "timer" 00:03:04.481 Message: lib/compressdev: Defining dependency "compressdev" 00:03:04.481 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:04.481 Message: lib/dmadev: Defining dependency "dmadev" 
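[Editor's note] Each "Compiler for C supports arguments -X: YES/NO" line above is meson probing the toolchain: it compiles a trivial translation unit with the candidate flag and records whether that succeeds. Roughly the shell equivalent of one probe (-Werror is added so an "unknown option" warning counts as failure; this is an approximation of meson's has_argument check, not its exact mechanics):

```bash
# Roughly what one meson flag probe does under the hood.
supports_flag() {
    echo 'int main(void) { return 0; }' |
        cc -Werror "$1" -x c - -o /dev/null 2>/dev/null
}

supports_flag -mavx512f && echo "-mavx512f: YES" || echo "-mavx512f: NO"
```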
00:03:04.481 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:04.481 Message: lib/power: Defining dependency "power" 00:03:04.481 Message: lib/reorder: Defining dependency "reorder" 00:03:04.481 Message: lib/security: Defining dependency "security" 00:03:04.481 Has header "linux/userfaultfd.h" : YES 00:03:04.481 Has header "linux/vduse.h" : YES 00:03:04.481 Message: lib/vhost: Defining dependency "vhost" 00:03:04.481 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:04.481 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:04.481 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:04.481 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:04.481 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:04.481 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:04.481 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:04.481 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:04.481 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:04.481 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:04.481 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:04.482 Configuring doxy-api-html.conf using configuration 00:03:04.482 Configuring doxy-api-man.conf using configuration 00:03:04.482 Program mandb found: YES (/usr/bin/mandb) 00:03:04.482 Program sphinx-build found: NO 00:03:04.482 Configuring rte_build_config.h using configuration 00:03:04.482 Message: 00:03:04.482 ================= 00:03:04.482 Applications Enabled 00:03:04.482 ================= 00:03:04.482 00:03:04.482 apps: 00:03:04.482 00:03:04.482 00:03:04.482 Message: 00:03:04.482 ================= 00:03:04.482 Libraries Enabled 00:03:04.482 ================= 00:03:04.482 00:03:04.482 libs: 00:03:04.482 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:04.482 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:04.482 cryptodev, dmadev, power, reorder, security, vhost, 00:03:04.482 00:03:04.482 Message: 00:03:04.482 =============== 00:03:04.482 Drivers Enabled 00:03:04.482 =============== 00:03:04.482 00:03:04.482 common: 00:03:04.482 00:03:04.482 bus: 00:03:04.482 pci, vdev, 00:03:04.482 mempool: 00:03:04.482 ring, 00:03:04.482 dma: 00:03:04.482 00:03:04.482 net: 00:03:04.482 00:03:04.482 crypto: 00:03:04.482 00:03:04.482 compress: 00:03:04.482 00:03:04.482 vdpa: 00:03:04.482 00:03:04.482 00:03:04.482 Message: 00:03:04.482 ================= 00:03:04.482 Content Skipped 00:03:04.482 ================= 00:03:04.482 00:03:04.482 apps: 00:03:04.482 dumpcap: explicitly disabled via build config 00:03:04.482 graph: explicitly disabled via build config 00:03:04.482 pdump: explicitly disabled via build config 00:03:04.482 proc-info: explicitly disabled via build config 00:03:04.482 test-acl: explicitly disabled via build config 00:03:04.482 test-bbdev: explicitly disabled via build config 00:03:04.482 test-cmdline: explicitly disabled via build config 00:03:04.482 test-compress-perf: explicitly disabled via build config 00:03:04.482 test-crypto-perf: explicitly disabled via build config 00:03:04.482 test-dma-perf: explicitly disabled via build config 00:03:04.482 test-eventdev: explicitly disabled via build config 00:03:04.482 test-fib: explicitly disabled via build config 00:03:04.482 test-flow-perf: explicitly disabled via build config 00:03:04.482 test-gpudev: explicitly disabled 
via build config 00:03:04.482 test-mldev: explicitly disabled via build config 00:03:04.482 test-pipeline: explicitly disabled via build config 00:03:04.482 test-pmd: explicitly disabled via build config 00:03:04.482 test-regex: explicitly disabled via build config 00:03:04.482 test-sad: explicitly disabled via build config 00:03:04.482 test-security-perf: explicitly disabled via build config 00:03:04.482 00:03:04.482 libs: 00:03:04.482 argparse: explicitly disabled via build config 00:03:04.482 metrics: explicitly disabled via build config 00:03:04.482 acl: explicitly disabled via build config 00:03:04.482 bbdev: explicitly disabled via build config 00:03:04.482 bitratestats: explicitly disabled via build config 00:03:04.482 bpf: explicitly disabled via build config 00:03:04.482 cfgfile: explicitly disabled via build config 00:03:04.482 distributor: explicitly disabled via build config 00:03:04.482 efd: explicitly disabled via build config 00:03:04.482 eventdev: explicitly disabled via build config 00:03:04.482 dispatcher: explicitly disabled via build config 00:03:04.482 gpudev: explicitly disabled via build config 00:03:04.482 gro: explicitly disabled via build config 00:03:04.482 gso: explicitly disabled via build config 00:03:04.482 ip_frag: explicitly disabled via build config 00:03:04.482 jobstats: explicitly disabled via build config 00:03:04.482 latencystats: explicitly disabled via build config 00:03:04.482 lpm: explicitly disabled via build config 00:03:04.482 member: explicitly disabled via build config 00:03:04.482 pcapng: explicitly disabled via build config 00:03:04.482 rawdev: explicitly disabled via build config 00:03:04.482 regexdev: explicitly disabled via build config 00:03:04.482 mldev: explicitly disabled via build config 00:03:04.482 rib: explicitly disabled via build config 00:03:04.482 sched: explicitly disabled via build config 00:03:04.482 stack: explicitly disabled via build config 00:03:04.482 ipsec: explicitly disabled via build config 00:03:04.482 pdcp: explicitly disabled via build config 00:03:04.482 fib: explicitly disabled via build config 00:03:04.482 port: explicitly disabled via build config 00:03:04.482 pdump: explicitly disabled via build config 00:03:04.482 table: explicitly disabled via build config 00:03:04.482 pipeline: explicitly disabled via build config 00:03:04.482 graph: explicitly disabled via build config 00:03:04.482 node: explicitly disabled via build config 00:03:04.482 00:03:04.482 drivers: 00:03:04.482 common/cpt: not in enabled drivers build config 00:03:04.482 common/dpaax: not in enabled drivers build config 00:03:04.482 common/iavf: not in enabled drivers build config 00:03:04.482 common/idpf: not in enabled drivers build config 00:03:04.482 common/ionic: not in enabled drivers build config 00:03:04.482 common/mvep: not in enabled drivers build config 00:03:04.482 common/octeontx: not in enabled drivers build config 00:03:04.482 bus/auxiliary: not in enabled drivers build config 00:03:04.482 bus/cdx: not in enabled drivers build config 00:03:04.482 bus/dpaa: not in enabled drivers build config 00:03:04.482 bus/fslmc: not in enabled drivers build config 00:03:04.482 bus/ifpga: not in enabled drivers build config 00:03:04.482 bus/platform: not in enabled drivers build config 00:03:04.482 bus/uacce: not in enabled drivers build config 00:03:04.482 bus/vmbus: not in enabled drivers build config 00:03:04.482 common/cnxk: not in enabled drivers build config 00:03:04.482 common/mlx5: not in enabled drivers build config 00:03:04.482 
common/nfp: not in enabled drivers build config 00:03:04.482 common/nitrox: not in enabled drivers build config 00:03:04.482 common/qat: not in enabled drivers build config 00:03:04.482 common/sfc_efx: not in enabled drivers build config 00:03:04.482 mempool/bucket: not in enabled drivers build config 00:03:04.482 mempool/cnxk: not in enabled drivers build config 00:03:04.482 mempool/dpaa: not in enabled drivers build config 00:03:04.482 mempool/dpaa2: not in enabled drivers build config 00:03:04.482 mempool/octeontx: not in enabled drivers build config 00:03:04.482 mempool/stack: not in enabled drivers build config 00:03:04.482 dma/cnxk: not in enabled drivers build config 00:03:04.482 dma/dpaa: not in enabled drivers build config 00:03:04.482 dma/dpaa2: not in enabled drivers build config 00:03:04.482 dma/hisilicon: not in enabled drivers build config 00:03:04.482 dma/idxd: not in enabled drivers build config 00:03:04.482 dma/ioat: not in enabled drivers build config 00:03:04.482 dma/skeleton: not in enabled drivers build config 00:03:04.482 net/af_packet: not in enabled drivers build config 00:03:04.482 net/af_xdp: not in enabled drivers build config 00:03:04.482 net/ark: not in enabled drivers build config 00:03:04.482 net/atlantic: not in enabled drivers build config 00:03:04.482 net/avp: not in enabled drivers build config 00:03:04.482 net/axgbe: not in enabled drivers build config 00:03:04.482 net/bnx2x: not in enabled drivers build config 00:03:04.482 net/bnxt: not in enabled drivers build config 00:03:04.482 net/bonding: not in enabled drivers build config 00:03:04.482 net/cnxk: not in enabled drivers build config 00:03:04.482 net/cpfl: not in enabled drivers build config 00:03:04.482 net/cxgbe: not in enabled drivers build config 00:03:04.482 net/dpaa: not in enabled drivers build config 00:03:04.482 net/dpaa2: not in enabled drivers build config 00:03:04.482 net/e1000: not in enabled drivers build config 00:03:04.482 net/ena: not in enabled drivers build config 00:03:04.482 net/enetc: not in enabled drivers build config 00:03:04.482 net/enetfec: not in enabled drivers build config 00:03:04.482 net/enic: not in enabled drivers build config 00:03:04.482 net/failsafe: not in enabled drivers build config 00:03:04.482 net/fm10k: not in enabled drivers build config 00:03:04.482 net/gve: not in enabled drivers build config 00:03:04.482 net/hinic: not in enabled drivers build config 00:03:04.482 net/hns3: not in enabled drivers build config 00:03:04.482 net/i40e: not in enabled drivers build config 00:03:04.482 net/iavf: not in enabled drivers build config 00:03:04.482 net/ice: not in enabled drivers build config 00:03:04.482 net/idpf: not in enabled drivers build config 00:03:04.482 net/igc: not in enabled drivers build config 00:03:04.482 net/ionic: not in enabled drivers build config 00:03:04.482 net/ipn3ke: not in enabled drivers build config 00:03:04.482 net/ixgbe: not in enabled drivers build config 00:03:04.482 net/mana: not in enabled drivers build config 00:03:04.482 net/memif: not in enabled drivers build config 00:03:04.482 net/mlx4: not in enabled drivers build config 00:03:04.482 net/mlx5: not in enabled drivers build config 00:03:04.482 net/mvneta: not in enabled drivers build config 00:03:04.482 net/mvpp2: not in enabled drivers build config 00:03:04.482 net/netvsc: not in enabled drivers build config 00:03:04.482 net/nfb: not in enabled drivers build config 00:03:04.482 net/nfp: not in enabled drivers build config 00:03:04.482 net/ngbe: not in enabled drivers build 
config 00:03:04.482 net/null: not in enabled drivers build config 00:03:04.482 net/octeontx: not in enabled drivers build config 00:03:04.482 net/octeon_ep: not in enabled drivers build config 00:03:04.482 net/pcap: not in enabled drivers build config 00:03:04.482 net/pfe: not in enabled drivers build config 00:03:04.482 net/qede: not in enabled drivers build config 00:03:04.482 net/ring: not in enabled drivers build config 00:03:04.482 net/sfc: not in enabled drivers build config 00:03:04.482 net/softnic: not in enabled drivers build config 00:03:04.482 net/tap: not in enabled drivers build config 00:03:04.482 net/thunderx: not in enabled drivers build config 00:03:04.482 net/txgbe: not in enabled drivers build config 00:03:04.482 net/vdev_netvsc: not in enabled drivers build config 00:03:04.482 net/vhost: not in enabled drivers build config 00:03:04.482 net/virtio: not in enabled drivers build config 00:03:04.482 net/vmxnet3: not in enabled drivers build config 00:03:04.482 raw/*: missing internal dependency, "rawdev" 00:03:04.482 crypto/armv8: not in enabled drivers build config 00:03:04.482 crypto/bcmfs: not in enabled drivers build config 00:03:04.482 crypto/caam_jr: not in enabled drivers build config 00:03:04.482 crypto/ccp: not in enabled drivers build config 00:03:04.483 crypto/cnxk: not in enabled drivers build config 00:03:04.483 crypto/dpaa_sec: not in enabled drivers build config 00:03:04.483 crypto/dpaa2_sec: not in enabled drivers build config 00:03:04.483 crypto/ipsec_mb: not in enabled drivers build config 00:03:04.483 crypto/mlx5: not in enabled drivers build config 00:03:04.483 crypto/mvsam: not in enabled drivers build config 00:03:04.483 crypto/nitrox: not in enabled drivers build config 00:03:04.483 crypto/null: not in enabled drivers build config 00:03:04.483 crypto/octeontx: not in enabled drivers build config 00:03:04.483 crypto/openssl: not in enabled drivers build config 00:03:04.483 crypto/scheduler: not in enabled drivers build config 00:03:04.483 crypto/uadk: not in enabled drivers build config 00:03:04.483 crypto/virtio: not in enabled drivers build config 00:03:04.483 compress/isal: not in enabled drivers build config 00:03:04.483 compress/mlx5: not in enabled drivers build config 00:03:04.483 compress/nitrox: not in enabled drivers build config 00:03:04.483 compress/octeontx: not in enabled drivers build config 00:03:04.483 compress/zlib: not in enabled drivers build config 00:03:04.483 regex/*: missing internal dependency, "regexdev" 00:03:04.483 ml/*: missing internal dependency, "mldev" 00:03:04.483 vdpa/ifc: not in enabled drivers build config 00:03:04.483 vdpa/mlx5: not in enabled drivers build config 00:03:04.483 vdpa/nfp: not in enabled drivers build config 00:03:04.483 vdpa/sfc: not in enabled drivers build config 00:03:04.483 event/*: missing internal dependency, "eventdev" 00:03:04.483 baseband/*: missing internal dependency, "bbdev" 00:03:04.483 gpu/*: missing internal dependency, "gpudev" 00:03:04.483 00:03:04.483 00:03:04.483 Build targets in project: 84 00:03:04.483 00:03:04.483 DPDK 24.03.0 00:03:04.483 00:03:04.483 User defined options 00:03:04.483 buildtype : debug 00:03:04.483 default_library : shared 00:03:04.483 libdir : lib 00:03:04.483 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:04.483 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:04.483 c_link_args : 00:03:04.483 cpu_instruction_set: native 00:03:04.483 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:03:04.483 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:03:04.483 enable_docs : false 00:03:04.483 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:04.483 enable_kmods : false 00:03:04.483 max_lcores : 128 00:03:04.483 tests : false 00:03:04.483 00:03:04.483 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.483 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:04.483 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:04.483 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:04.483 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:04.483 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:04.483 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:04.483 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:04.483 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:04.483 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.483 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.483 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:04.483 [11/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.483 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:04.483 [13/267] Linking static target lib/librte_kvargs.a 00:03:04.483 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:04.483 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:04.483 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:04.483 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.483 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.483 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:04.483 [20/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:04.483 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.483 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.483 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:04.483 [24/267] Linking static target lib/librte_log.a 00:03:04.483 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.483 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:04.483 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.483 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.483 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.483 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.483 [31/267] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:03:04.483 [32/267] Linking static target lib/librte_pci.a 00:03:04.483 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.483 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.483 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.483 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.483 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.756 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.756 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.756 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:04.756 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.756 [42/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:04.756 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.756 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.756 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.756 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:04.756 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:04.756 [48/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.756 [49/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.756 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:04.756 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:04.756 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.756 [53/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.756 [54/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.756 [55/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:04.756 [56/267] Linking static target lib/librte_ring.a 00:03:04.756 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:04.756 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:04.756 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.756 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.756 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.756 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.756 [63/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:04.756 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.756 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.756 [66/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:04.756 [67/267] Linking static target lib/librte_meter.a 00:03:04.756 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:04.756 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.756 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.756 [71/267] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.756 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.756 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.756 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.756 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:04.756 [76/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.756 [77/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:04.756 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:04.756 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:04.756 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.756 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.756 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:04.756 [83/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.756 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.756 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.756 [86/267] Linking static target lib/librte_telemetry.a 00:03:04.756 [87/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.756 [88/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.756 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:04.756 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:04.756 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.756 [92/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:04.756 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:04.756 [94/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.756 [95/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.756 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:04.756 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:04.756 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:04.756 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.756 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:05.018 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.018 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:05.018 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:05.018 [104/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.018 [105/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:05.018 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:05.018 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:05.018 [108/267] Linking static target lib/librte_cmdline.a 00:03:05.018 [109/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.018 [110/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:05.018 
[111/267] Linking static target lib/librte_timer.a 00:03:05.018 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:05.018 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:05.018 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:05.018 [115/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:05.018 [116/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.018 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:05.018 [118/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.018 [119/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.018 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:05.018 [121/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:05.018 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:05.018 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:05.018 [124/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:05.018 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:05.018 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:05.018 [127/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:05.018 [128/267] Linking static target lib/librte_compressdev.a 00:03:05.018 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:05.018 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:05.018 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:05.018 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:05.018 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.018 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:05.018 [135/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.018 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:05.018 [137/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:05.018 [138/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:05.018 [139/267] Linking static target lib/librte_mempool.a 00:03:05.018 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:05.018 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:05.018 [142/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.018 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:05.018 [144/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.018 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:05.018 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.018 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.018 [148/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.018 [149/267] Linking static target lib/librte_rcu.a 00:03:05.018 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:05.018 [151/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:05.018 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.018 [153/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.018 [154/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:05.018 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:05.018 [156/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.018 [157/267] Linking target lib/librte_log.so.24.1 00:03:05.018 [158/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.018 [159/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.018 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.018 [161/267] Linking static target lib/librte_reorder.a 00:03:05.018 [162/267] Linking static target lib/librte_dmadev.a 00:03:05.018 [163/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:05.018 [164/267] Linking static target lib/librte_security.a 00:03:05.018 [165/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.018 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.018 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.018 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:05.018 [169/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.018 [170/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.018 [171/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:05.018 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:05.018 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.018 [174/267] Linking static target lib/librte_eal.a 00:03:05.018 [175/267] Linking static target lib/librte_net.a 00:03:05.018 [176/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.018 [177/267] Linking static target lib/librte_power.a 00:03:05.018 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:05.018 [179/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.018 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.279 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:05.279 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.279 [183/267] Linking static target lib/librte_mbuf.a 00:03:05.280 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:05.280 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.280 [186/267] Linking target lib/librte_kvargs.so.24.1 00:03:05.280 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.280 [188/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.280 [189/267] Linking static target drivers/librte_bus_vdev.a 00:03:05.280 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.280 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.280 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.280 [193/267] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.280 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.280 [195/267] Linking static target drivers/librte_bus_pci.a 00:03:05.280 [196/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.280 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.280 [198/267] Linking static target lib/librte_hash.a 00:03:05.280 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:05.280 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.280 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.280 [202/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.540 [203/267] Linking static target drivers/librte_mempool_ring.a 00:03:05.540 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:05.540 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.540 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.540 [207/267] Linking static target lib/librte_cryptodev.a 00:03:05.540 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.540 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.540 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.540 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.540 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.540 [213/267] Linking static target lib/librte_ethdev.a 00:03:05.540 [214/267] Linking target lib/librte_telemetry.so.24.1 00:03:05.540 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.801 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.801 [217/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:05.801 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.801 [219/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.801 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.062 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.062 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.062 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.062 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.322 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.322 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.890 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.890 [228/267] Linking static target lib/librte_vhost.a 00:03:07.827 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.766 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.335 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.274 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.274 [233/267] Linking target lib/librte_eal.so.24.1 00:03:16.274 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:16.534 [235/267] Linking target lib/librte_pci.so.24.1 00:03:16.534 [236/267] Linking target lib/librte_ring.so.24.1 00:03:16.534 [237/267] Linking target lib/librte_meter.so.24.1 00:03:16.534 [238/267] Linking target lib/librte_dmadev.so.24.1 00:03:16.534 [239/267] Linking target lib/librte_timer.so.24.1 00:03:16.534 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:16.534 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:16.534 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:16.534 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:16.534 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:16.534 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:16.534 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:16.534 [247/267] Linking target lib/librte_rcu.so.24.1 00:03:16.534 [248/267] Linking target lib/librte_mempool.so.24.1 00:03:16.795 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:16.795 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:16.795 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:16.795 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:16.795 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:17.055 [254/267] Linking target lib/librte_net.so.24.1 00:03:17.055 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:17.055 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:17.055 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:17.055 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:17.055 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:17.055 [260/267] Linking target lib/librte_hash.so.24.1 00:03:17.055 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:17.055 [262/267] Linking target lib/librte_ethdev.so.24.1 00:03:17.055 [263/267] Linking target lib/librte_security.so.24.1 00:03:17.315 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:17.316 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:17.316 [266/267] Linking target lib/librte_power.so.24.1 00:03:17.316 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:17.316 INFO: autodetecting backend as ninja 00:03:17.316 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:21.518 CC lib/ut_mock/mock.o 00:03:21.518 CC lib/ut/ut.o 00:03:21.518 CC lib/log/log.o 00:03:21.518 CC lib/log/log_flags.o 00:03:21.518 CC lib/log/log_deprecated.o 00:03:21.518 LIB libspdk_ut.a 
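Note: the two INFO lines above are meson re-detecting the ninja backend for the DPDK subproject and printing the exact build command this job runs. A minimal sketch of replaying that subbuild by hand, assuming the option names map one-to-one onto the configuration summary printed before the [1/267] compile steps (SPDK's configure script is what normally generates this option set):

    # Hypothetical manual replay of the DPDK subbuild shown in this log;
    # option values are copied from the meson summary above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C build-tmp -j 144    # the command meson reports above

The long disable_libs list from the summary would be passed the same way, as a comma-separated -Ddisable_libs value; it is elided here only to keep the sketch short.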
00:03:21.518 LIB libspdk_ut_mock.a 00:03:21.518 SO libspdk_ut.so.2.0 00:03:21.518 LIB libspdk_log.a 00:03:21.518 SO libspdk_ut_mock.so.6.0 00:03:21.518 SO libspdk_log.so.7.1 00:03:21.518 SYMLINK libspdk_ut.so 00:03:21.518 SYMLINK libspdk_ut_mock.so 00:03:21.518 SYMLINK libspdk_log.so 00:03:21.779 CC lib/util/base64.o 00:03:21.779 CC lib/util/crc16.o 00:03:21.779 CC lib/util/bit_array.o 00:03:21.779 CC lib/util/cpuset.o 00:03:21.779 CC lib/util/crc32.o 00:03:21.779 CC lib/util/crc32c.o 00:03:21.779 CC lib/util/crc32_ieee.o 00:03:21.779 CC lib/util/crc64.o 00:03:21.779 CC lib/util/dif.o 00:03:21.779 CC lib/util/fd.o 00:03:21.779 CC lib/util/fd_group.o 00:03:21.779 CC lib/util/file.o 00:03:21.779 CC lib/util/hexlify.o 00:03:21.779 CC lib/util/math.o 00:03:21.779 CC lib/util/iov.o 00:03:21.779 CC lib/util/net.o 00:03:21.779 CC lib/util/pipe.o 00:03:21.779 CC lib/util/strerror_tls.o 00:03:21.779 CC lib/util/string.o 00:03:21.779 CC lib/util/uuid.o 00:03:21.779 CC lib/util/xor.o 00:03:21.779 CC lib/util/zipf.o 00:03:21.779 CC lib/util/md5.o 00:03:21.779 CC lib/ioat/ioat.o 00:03:21.779 CC lib/dma/dma.o 00:03:21.779 CXX lib/trace_parser/trace.o 00:03:22.040 CC lib/vfio_user/host/vfio_user.o 00:03:22.040 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.040 LIB libspdk_dma.a 00:03:22.040 SO libspdk_dma.so.5.0 00:03:22.301 LIB libspdk_ioat.a 00:03:22.301 SYMLINK libspdk_dma.so 00:03:22.301 SO libspdk_ioat.so.7.0 00:03:22.301 SYMLINK libspdk_ioat.so 00:03:22.301 LIB libspdk_vfio_user.a 00:03:22.301 SO libspdk_vfio_user.so.5.0 00:03:22.301 LIB libspdk_util.a 00:03:22.301 SYMLINK libspdk_vfio_user.so 00:03:22.562 SO libspdk_util.so.10.1 00:03:22.562 SYMLINK libspdk_util.so 00:03:22.562 LIB libspdk_trace_parser.a 00:03:22.823 SO libspdk_trace_parser.so.6.0 00:03:22.823 SYMLINK libspdk_trace_parser.so 00:03:22.823 CC lib/rdma_utils/rdma_utils.o 00:03:22.823 CC lib/conf/conf.o 00:03:22.823 CC lib/env_dpdk/env.o 00:03:22.823 CC lib/env_dpdk/memory.o 00:03:22.823 CC lib/json/json_parse.o 00:03:22.823 CC lib/json/json_util.o 00:03:22.823 CC lib/env_dpdk/pci.o 00:03:22.823 CC lib/json/json_write.o 00:03:22.823 CC lib/env_dpdk/init.o 00:03:22.823 CC lib/env_dpdk/threads.o 00:03:22.823 CC lib/idxd/idxd.o 00:03:22.823 CC lib/vmd/vmd.o 00:03:22.823 CC lib/env_dpdk/pci_ioat.o 00:03:22.823 CC lib/vmd/led.o 00:03:22.823 CC lib/idxd/idxd_user.o 00:03:22.823 CC lib/env_dpdk/pci_vmd.o 00:03:22.823 CC lib/env_dpdk/pci_virtio.o 00:03:22.823 CC lib/idxd/idxd_kernel.o 00:03:22.823 CC lib/rdma_provider/common.o 00:03:22.823 CC lib/env_dpdk/pci_idxd.o 00:03:22.823 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.823 CC lib/env_dpdk/pci_event.o 00:03:22.823 CC lib/env_dpdk/sigbus_handler.o 00:03:22.823 CC lib/env_dpdk/pci_dpdk.o 00:03:22.823 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.823 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:23.084 LIB libspdk_rdma_provider.a 00:03:23.084 LIB libspdk_conf.a 00:03:23.084 SO libspdk_rdma_provider.so.6.0 00:03:23.084 LIB libspdk_rdma_utils.a 00:03:23.084 SO libspdk_conf.so.6.0 00:03:23.351 LIB libspdk_json.a 00:03:23.351 SO libspdk_rdma_utils.so.1.0 00:03:23.351 SYMLINK libspdk_rdma_provider.so 00:03:23.351 SYMLINK libspdk_conf.so 00:03:23.351 SO libspdk_json.so.6.0 00:03:23.351 SYMLINK libspdk_rdma_utils.so 00:03:23.351 SYMLINK libspdk_json.so 00:03:23.351 LIB libspdk_idxd.a 00:03:23.613 SO libspdk_idxd.so.12.1 00:03:23.613 LIB libspdk_vmd.a 00:03:23.613 SO libspdk_vmd.so.6.0 00:03:23.613 SYMLINK libspdk_idxd.so 00:03:23.613 SYMLINK libspdk_vmd.so 00:03:23.613 CC lib/jsonrpc/jsonrpc_server.o 
00:03:23.613 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:23.613 CC lib/jsonrpc/jsonrpc_client.o 00:03:23.613 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:23.874 LIB libspdk_jsonrpc.a 00:03:23.874 SO libspdk_jsonrpc.so.6.0 00:03:24.135 SYMLINK libspdk_jsonrpc.so 00:03:24.135 LIB libspdk_env_dpdk.a 00:03:24.135 SO libspdk_env_dpdk.so.15.1 00:03:24.396 SYMLINK libspdk_env_dpdk.so 00:03:24.396 CC lib/rpc/rpc.o 00:03:24.656 LIB libspdk_rpc.a 00:03:24.656 SO libspdk_rpc.so.6.0 00:03:24.656 SYMLINK libspdk_rpc.so 00:03:24.917 CC lib/notify/notify.o 00:03:24.917 CC lib/notify/notify_rpc.o 00:03:25.178 CC lib/trace/trace.o 00:03:25.178 CC lib/trace/trace_flags.o 00:03:25.178 CC lib/trace/trace_rpc.o 00:03:25.178 CC lib/keyring/keyring_rpc.o 00:03:25.178 CC lib/keyring/keyring.o 00:03:25.178 LIB libspdk_notify.a 00:03:25.178 SO libspdk_notify.so.6.0 00:03:25.178 LIB libspdk_trace.a 00:03:25.178 LIB libspdk_keyring.a 00:03:25.439 SO libspdk_keyring.so.2.0 00:03:25.439 SO libspdk_trace.so.11.0 00:03:25.439 SYMLINK libspdk_notify.so 00:03:25.439 SYMLINK libspdk_keyring.so 00:03:25.439 SYMLINK libspdk_trace.so 00:03:25.700 CC lib/thread/thread.o 00:03:25.700 CC lib/thread/iobuf.o 00:03:25.700 CC lib/sock/sock.o 00:03:25.700 CC lib/sock/sock_rpc.o 00:03:26.273 LIB libspdk_sock.a 00:03:26.273 SO libspdk_sock.so.10.0 00:03:26.273 SYMLINK libspdk_sock.so 00:03:26.535 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:26.535 CC lib/nvme/nvme_ctrlr.o 00:03:26.535 CC lib/nvme/nvme_fabric.o 00:03:26.535 CC lib/nvme/nvme_ns_cmd.o 00:03:26.535 CC lib/nvme/nvme_ns.o 00:03:26.535 CC lib/nvme/nvme_pcie_common.o 00:03:26.535 CC lib/nvme/nvme_pcie.o 00:03:26.535 CC lib/nvme/nvme_qpair.o 00:03:26.535 CC lib/nvme/nvme.o 00:03:26.535 CC lib/nvme/nvme_quirks.o 00:03:26.535 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.535 CC lib/nvme/nvme_transport.o 00:03:26.535 CC lib/nvme/nvme_discovery.o 00:03:26.535 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.535 CC lib/nvme/nvme_tcp.o 00:03:26.535 CC lib/nvme/nvme_opal.o 00:03:26.535 CC lib/nvme/nvme_io_msg.o 00:03:26.535 CC lib/nvme/nvme_poll_group.o 00:03:26.535 CC lib/nvme/nvme_zns.o 00:03:26.535 CC lib/nvme/nvme_stubs.o 00:03:26.535 CC lib/nvme/nvme_auth.o 00:03:26.535 CC lib/nvme/nvme_cuse.o 00:03:26.535 CC lib/nvme/nvme_vfio_user.o 00:03:26.535 CC lib/nvme/nvme_rdma.o 00:03:27.108 LIB libspdk_thread.a 00:03:27.108 SO libspdk_thread.so.11.0 00:03:27.108 SYMLINK libspdk_thread.so 00:03:27.370 CC lib/blob/blobstore.o 00:03:27.370 CC lib/blob/request.o 00:03:27.370 CC lib/blob/zeroes.o 00:03:27.370 CC lib/blob/blob_bs_dev.o 00:03:27.370 CC lib/fsdev/fsdev.o 00:03:27.370 CC lib/init/json_config.o 00:03:27.370 CC lib/fsdev/fsdev_rpc.o 00:03:27.370 CC lib/init/subsystem.o 00:03:27.370 CC lib/init/rpc.o 00:03:27.370 CC lib/fsdev/fsdev_io.o 00:03:27.370 CC lib/init/subsystem_rpc.o 00:03:27.631 CC lib/vfu_tgt/tgt_endpoint.o 00:03:27.631 CC lib/accel/accel.o 00:03:27.631 CC lib/vfu_tgt/tgt_rpc.o 00:03:27.631 CC lib/accel/accel_rpc.o 00:03:27.631 CC lib/accel/accel_sw.o 00:03:27.631 CC lib/virtio/virtio.o 00:03:27.631 CC lib/virtio/virtio_vhost_user.o 00:03:27.631 CC lib/virtio/virtio_vfio_user.o 00:03:27.631 CC lib/virtio/virtio_pci.o 00:03:27.631 LIB libspdk_init.a 00:03:27.893 SO libspdk_init.so.6.0 00:03:27.893 LIB libspdk_virtio.a 00:03:27.893 LIB libspdk_vfu_tgt.a 00:03:27.893 SO libspdk_virtio.so.7.0 00:03:27.893 SO libspdk_vfu_tgt.so.3.0 00:03:27.893 SYMLINK libspdk_init.so 00:03:27.893 SYMLINK libspdk_vfu_tgt.so 00:03:27.893 SYMLINK libspdk_virtio.so 00:03:28.154 LIB libspdk_fsdev.a 00:03:28.154 SO 
libspdk_fsdev.so.2.0 00:03:28.154 SYMLINK libspdk_fsdev.so 00:03:28.154 CC lib/event/app.o 00:03:28.154 CC lib/event/log_rpc.o 00:03:28.154 CC lib/event/reactor.o 00:03:28.154 CC lib/event/app_rpc.o 00:03:28.154 CC lib/event/scheduler_static.o 00:03:28.416 LIB libspdk_accel.a 00:03:28.416 SO libspdk_accel.so.16.0 00:03:28.416 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.677 SYMLINK libspdk_accel.so 00:03:28.677 LIB libspdk_nvme.a 00:03:28.677 LIB libspdk_event.a 00:03:28.677 SO libspdk_event.so.14.0 00:03:28.677 SO libspdk_nvme.so.15.0 00:03:28.677 SYMLINK libspdk_event.so 00:03:28.938 CC lib/bdev/bdev.o 00:03:28.938 CC lib/bdev/bdev_rpc.o 00:03:28.938 CC lib/bdev/bdev_zone.o 00:03:28.938 CC lib/bdev/part.o 00:03:28.938 CC lib/bdev/scsi_nvme.o 00:03:28.938 SYMLINK libspdk_nvme.so 00:03:29.199 LIB libspdk_fuse_dispatcher.a 00:03:29.199 SO libspdk_fuse_dispatcher.so.1.0 00:03:29.199 SYMLINK libspdk_fuse_dispatcher.so 00:03:30.140 LIB libspdk_blob.a 00:03:30.141 SO libspdk_blob.so.11.0 00:03:30.141 SYMLINK libspdk_blob.so 00:03:30.713 CC lib/lvol/lvol.o 00:03:30.713 CC lib/blobfs/blobfs.o 00:03:30.713 CC lib/blobfs/tree.o 00:03:31.286 LIB libspdk_bdev.a 00:03:31.286 SO libspdk_bdev.so.17.0 00:03:31.286 LIB libspdk_blobfs.a 00:03:31.286 SO libspdk_blobfs.so.10.0 00:03:31.286 SYMLINK libspdk_bdev.so 00:03:31.286 LIB libspdk_lvol.a 00:03:31.546 SYMLINK libspdk_blobfs.so 00:03:31.546 SO libspdk_lvol.so.10.0 00:03:31.546 SYMLINK libspdk_lvol.so 00:03:31.807 CC lib/scsi/dev.o 00:03:31.807 CC lib/scsi/lun.o 00:03:31.807 CC lib/nvmf/ctrlr.o 00:03:31.807 CC lib/scsi/port.o 00:03:31.807 CC lib/nvmf/ctrlr_discovery.o 00:03:31.807 CC lib/scsi/scsi.o 00:03:31.807 CC lib/scsi/scsi_rpc.o 00:03:31.807 CC lib/nvmf/ctrlr_bdev.o 00:03:31.807 CC lib/scsi/scsi_bdev.o 00:03:31.807 CC lib/scsi/task.o 00:03:31.807 CC lib/ublk/ublk.o 00:03:31.807 CC lib/scsi/scsi_pr.o 00:03:31.807 CC lib/nvmf/nvmf.o 00:03:31.807 CC lib/nvmf/subsystem.o 00:03:31.807 CC lib/ublk/ublk_rpc.o 00:03:31.807 CC lib/nvmf/nvmf_rpc.o 00:03:31.807 CC lib/nvmf/transport.o 00:03:31.807 CC lib/nvmf/tcp.o 00:03:31.807 CC lib/ftl/ftl_core.o 00:03:31.807 CC lib/nbd/nbd.o 00:03:31.807 CC lib/ftl/ftl_init.o 00:03:31.807 CC lib/nvmf/stubs.o 00:03:31.807 CC lib/nbd/nbd_rpc.o 00:03:31.807 CC lib/ftl/ftl_layout.o 00:03:31.807 CC lib/nvmf/mdns_server.o 00:03:31.807 CC lib/ftl/ftl_debug.o 00:03:31.807 CC lib/nvmf/vfio_user.o 00:03:31.807 CC lib/nvmf/rdma.o 00:03:31.807 CC lib/ftl/ftl_io.o 00:03:31.807 CC lib/nvmf/auth.o 00:03:31.807 CC lib/ftl/ftl_sb.o 00:03:31.807 CC lib/ftl/ftl_l2p.o 00:03:31.807 CC lib/ftl/ftl_l2p_flat.o 00:03:31.807 CC lib/ftl/ftl_nv_cache.o 00:03:31.807 CC lib/ftl/ftl_band.o 00:03:31.807 CC lib/ftl/ftl_band_ops.o 00:03:31.807 CC lib/ftl/ftl_writer.o 00:03:31.807 CC lib/ftl/ftl_rq.o 00:03:31.807 CC lib/ftl/ftl_reloc.o 00:03:31.807 CC lib/ftl/ftl_l2p_cache.o 00:03:31.807 CC lib/ftl/ftl_p2l.o 00:03:31.807 CC lib/ftl/ftl_p2l_log.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.807 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.807 CC 
lib/ftl/utils/ftl_conf.o 00:03:31.807 CC lib/ftl/utils/ftl_md.o 00:03:31.807 CC lib/ftl/utils/ftl_mempool.o 00:03:31.807 CC lib/ftl/utils/ftl_bitmap.o 00:03:31.807 CC lib/ftl/utils/ftl_property.o 00:03:31.807 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.807 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:31.807 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.807 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.807 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.807 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.807 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:31.807 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:31.807 CC lib/ftl/base/ftl_base_dev.o 00:03:31.807 CC lib/ftl/ftl_trace.o 00:03:31.807 CC lib/ftl/base/ftl_base_bdev.o 00:03:32.375 LIB libspdk_nbd.a 00:03:32.375 SO libspdk_nbd.so.7.0 00:03:32.375 LIB libspdk_scsi.a 00:03:32.375 SO libspdk_scsi.so.9.0 00:03:32.375 SYMLINK libspdk_nbd.so 00:03:32.375 SYMLINK libspdk_scsi.so 00:03:32.375 LIB libspdk_ublk.a 00:03:32.635 SO libspdk_ublk.so.3.0 00:03:32.635 SYMLINK libspdk_ublk.so 00:03:32.895 LIB libspdk_ftl.a 00:03:32.895 CC lib/iscsi/conn.o 00:03:32.895 CC lib/iscsi/init_grp.o 00:03:32.895 CC lib/iscsi/iscsi.o 00:03:32.895 CC lib/iscsi/param.o 00:03:32.895 CC lib/iscsi/portal_grp.o 00:03:32.895 CC lib/iscsi/tgt_node.o 00:03:32.895 CC lib/iscsi/iscsi_subsystem.o 00:03:32.895 CC lib/iscsi/iscsi_rpc.o 00:03:32.895 CC lib/iscsi/task.o 00:03:32.895 CC lib/vhost/vhost.o 00:03:32.895 CC lib/vhost/vhost_rpc.o 00:03:32.895 CC lib/vhost/vhost_scsi.o 00:03:32.895 CC lib/vhost/vhost_blk.o 00:03:32.895 CC lib/vhost/rte_vhost_user.o 00:03:32.895 SO libspdk_ftl.so.9.0 00:03:33.156 SYMLINK libspdk_ftl.so 00:03:33.156 LIB libspdk_nvmf.a 00:03:33.417 SO libspdk_nvmf.so.20.0 00:03:33.417 SYMLINK libspdk_nvmf.so 00:03:33.989 LIB libspdk_vhost.a 00:03:33.989 SO libspdk_vhost.so.8.0 00:03:33.989 SYMLINK libspdk_vhost.so 00:03:33.989 LIB libspdk_iscsi.a 00:03:33.989 SO libspdk_iscsi.so.8.0 00:03:34.251 SYMLINK libspdk_iscsi.so 00:03:34.822 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.822 CC module/vfu_device/vfu_virtio.o 00:03:34.822 CC module/vfu_device/vfu_virtio_blk.o 00:03:34.822 CC module/vfu_device/vfu_virtio_scsi.o 00:03:34.822 CC module/vfu_device/vfu_virtio_rpc.o 00:03:34.822 CC module/vfu_device/vfu_virtio_fs.o 00:03:35.082 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:35.082 CC module/accel/dsa/accel_dsa.o 00:03:35.082 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:35.082 CC module/accel/dsa/accel_dsa_rpc.o 00:03:35.082 LIB libspdk_env_dpdk_rpc.a 00:03:35.082 CC module/accel/ioat/accel_ioat.o 00:03:35.082 CC module/accel/ioat/accel_ioat_rpc.o 00:03:35.082 CC module/sock/posix/posix.o 00:03:35.082 CC module/keyring/linux/keyring.o 00:03:35.082 CC module/accel/error/accel_error.o 00:03:35.082 CC module/keyring/linux/keyring_rpc.o 00:03:35.082 CC module/accel/error/accel_error_rpc.o 00:03:35.082 CC module/accel/iaa/accel_iaa.o 00:03:35.082 CC module/accel/iaa/accel_iaa_rpc.o 00:03:35.082 CC module/blob/bdev/blob_bdev.o 00:03:35.082 CC module/keyring/file/keyring.o 00:03:35.082 CC module/keyring/file/keyring_rpc.o 00:03:35.082 CC module/fsdev/aio/fsdev_aio.o 00:03:35.082 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:35.082 CC module/fsdev/aio/linux_aio_mgr.o 00:03:35.082 CC module/scheduler/gscheduler/gscheduler.o 
00:03:35.083 SO libspdk_env_dpdk_rpc.so.6.0 00:03:35.083 SYMLINK libspdk_env_dpdk_rpc.so 00:03:35.083 LIB libspdk_scheduler_dynamic.a 00:03:35.083 LIB libspdk_scheduler_dpdk_governor.a 00:03:35.083 LIB libspdk_keyring_file.a 00:03:35.083 LIB libspdk_keyring_linux.a 00:03:35.083 LIB libspdk_scheduler_gscheduler.a 00:03:35.083 LIB libspdk_accel_ioat.a 00:03:35.083 SO libspdk_keyring_file.so.2.0 00:03:35.083 SO libspdk_scheduler_dynamic.so.4.0 00:03:35.083 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:35.083 LIB libspdk_accel_error.a 00:03:35.083 SO libspdk_scheduler_gscheduler.so.4.0 00:03:35.083 SO libspdk_keyring_linux.so.1.0 00:03:35.083 SO libspdk_accel_ioat.so.6.0 00:03:35.083 LIB libspdk_accel_iaa.a 00:03:35.083 SYMLINK libspdk_scheduler_dynamic.so 00:03:35.343 SO libspdk_accel_error.so.2.0 00:03:35.343 SO libspdk_accel_iaa.so.3.0 00:03:35.343 SYMLINK libspdk_keyring_file.so 00:03:35.343 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:35.343 SYMLINK libspdk_accel_ioat.so 00:03:35.343 SYMLINK libspdk_scheduler_gscheduler.so 00:03:35.343 SYMLINK libspdk_keyring_linux.so 00:03:35.343 LIB libspdk_accel_dsa.a 00:03:35.343 LIB libspdk_blob_bdev.a 00:03:35.343 SYMLINK libspdk_accel_error.so 00:03:35.343 SO libspdk_accel_dsa.so.5.0 00:03:35.343 SYMLINK libspdk_accel_iaa.so 00:03:35.343 SO libspdk_blob_bdev.so.11.0 00:03:35.343 LIB libspdk_vfu_device.a 00:03:35.343 SYMLINK libspdk_accel_dsa.so 00:03:35.343 SYMLINK libspdk_blob_bdev.so 00:03:35.343 SO libspdk_vfu_device.so.3.0 00:03:35.604 SYMLINK libspdk_vfu_device.so 00:03:35.604 LIB libspdk_fsdev_aio.a 00:03:35.604 LIB libspdk_sock_posix.a 00:03:35.604 SO libspdk_fsdev_aio.so.1.0 00:03:35.604 SO libspdk_sock_posix.so.6.0 00:03:35.865 SYMLINK libspdk_fsdev_aio.so 00:03:35.865 SYMLINK libspdk_sock_posix.so 00:03:35.865 CC module/bdev/delay/vbdev_delay.o 00:03:35.865 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:35.865 CC module/bdev/error/vbdev_error.o 00:03:35.865 CC module/bdev/malloc/bdev_malloc.o 00:03:35.865 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:35.865 CC module/bdev/error/vbdev_error_rpc.o 00:03:35.865 CC module/blobfs/bdev/blobfs_bdev.o 00:03:35.865 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:35.865 CC module/bdev/nvme/bdev_nvme.o 00:03:35.865 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:35.865 CC module/bdev/gpt/gpt.o 00:03:35.865 CC module/bdev/nvme/nvme_rpc.o 00:03:35.865 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.865 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.865 CC module/bdev/gpt/vbdev_gpt.o 00:03:35.865 CC module/bdev/nvme/vbdev_opal.o 00:03:35.865 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.865 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:35.865 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.865 CC module/bdev/passthru/vbdev_passthru.o 00:03:35.865 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:35.865 CC module/bdev/aio/bdev_aio.o 00:03:35.865 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.865 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.865 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.865 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.865 CC module/bdev/null/bdev_null.o 00:03:35.865 CC module/bdev/null/bdev_null_rpc.o 00:03:35.865 CC module/bdev/ftl/bdev_ftl.o 00:03:35.865 CC module/bdev/raid/bdev_raid.o 00:03:35.865 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.865 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.865 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.865 CC module/bdev/split/vbdev_split.o 00:03:35.865 CC module/bdev/raid/raid0.o 00:03:35.865 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.865 CC 
module/bdev/raid/raid1.o 00:03:35.865 CC module/bdev/split/vbdev_split_rpc.o 00:03:35.865 CC module/bdev/raid/concat.o 00:03:35.865 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.865 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:35.865 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:36.124 LIB libspdk_blobfs_bdev.a 00:03:36.385 SO libspdk_blobfs_bdev.so.6.0 00:03:36.385 LIB libspdk_bdev_error.a 00:03:36.385 LIB libspdk_bdev_split.a 00:03:36.385 LIB libspdk_bdev_gpt.a 00:03:36.385 LIB libspdk_bdev_null.a 00:03:36.385 SO libspdk_bdev_split.so.6.0 00:03:36.385 SO libspdk_bdev_error.so.6.0 00:03:36.385 LIB libspdk_bdev_passthru.a 00:03:36.385 SO libspdk_bdev_gpt.so.6.0 00:03:36.385 SYMLINK libspdk_blobfs_bdev.so 00:03:36.385 SO libspdk_bdev_null.so.6.0 00:03:36.385 LIB libspdk_bdev_ftl.a 00:03:36.385 SO libspdk_bdev_passthru.so.6.0 00:03:36.385 LIB libspdk_bdev_delay.a 00:03:36.385 SYMLINK libspdk_bdev_split.so 00:03:36.385 LIB libspdk_bdev_malloc.a 00:03:36.385 LIB libspdk_bdev_zone_block.a 00:03:36.385 LIB libspdk_bdev_aio.a 00:03:36.385 SYMLINK libspdk_bdev_error.so 00:03:36.385 SO libspdk_bdev_delay.so.6.0 00:03:36.385 SYMLINK libspdk_bdev_null.so 00:03:36.385 SO libspdk_bdev_ftl.so.6.0 00:03:36.385 LIB libspdk_bdev_iscsi.a 00:03:36.385 SYMLINK libspdk_bdev_gpt.so 00:03:36.385 SO libspdk_bdev_zone_block.so.6.0 00:03:36.385 SO libspdk_bdev_malloc.so.6.0 00:03:36.385 SYMLINK libspdk_bdev_passthru.so 00:03:36.385 SO libspdk_bdev_aio.so.6.0 00:03:36.385 SO libspdk_bdev_iscsi.so.6.0 00:03:36.385 SYMLINK libspdk_bdev_delay.so 00:03:36.385 SYMLINK libspdk_bdev_ftl.so 00:03:36.385 SYMLINK libspdk_bdev_zone_block.so 00:03:36.385 SYMLINK libspdk_bdev_aio.so 00:03:36.385 SYMLINK libspdk_bdev_malloc.so 00:03:36.646 SYMLINK libspdk_bdev_iscsi.so 00:03:36.646 LIB libspdk_bdev_lvol.a 00:03:36.646 LIB libspdk_bdev_virtio.a 00:03:36.646 SO libspdk_bdev_lvol.so.6.0 00:03:36.646 SO libspdk_bdev_virtio.so.6.0 00:03:36.646 SYMLINK libspdk_bdev_lvol.so 00:03:36.646 SYMLINK libspdk_bdev_virtio.so 00:03:36.906 LIB libspdk_bdev_raid.a 00:03:36.906 SO libspdk_bdev_raid.so.6.0 00:03:37.166 SYMLINK libspdk_bdev_raid.so 00:03:38.107 LIB libspdk_bdev_nvme.a 00:03:38.368 SO libspdk_bdev_nvme.so.7.1 00:03:38.368 SYMLINK libspdk_bdev_nvme.so 00:03:38.940 CC module/event/subsystems/iobuf/iobuf.o 00:03:38.940 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:38.940 CC module/event/subsystems/keyring/keyring.o 00:03:38.940 CC module/event/subsystems/vmd/vmd.o 00:03:38.940 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:38.940 CC module/event/subsystems/sock/sock.o 00:03:38.940 CC module/event/subsystems/fsdev/fsdev.o 00:03:38.940 CC module/event/subsystems/scheduler/scheduler.o 00:03:38.940 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:39.201 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:39.201 LIB libspdk_event_keyring.a 00:03:39.201 LIB libspdk_event_fsdev.a 00:03:39.201 LIB libspdk_event_scheduler.a 00:03:39.201 LIB libspdk_event_vhost_blk.a 00:03:39.201 LIB libspdk_event_sock.a 00:03:39.201 SO libspdk_event_keyring.so.1.0 00:03:39.201 LIB libspdk_event_vfu_tgt.a 00:03:39.201 LIB libspdk_event_vmd.a 00:03:39.201 LIB libspdk_event_iobuf.a 00:03:39.201 SO libspdk_event_fsdev.so.1.0 00:03:39.201 SO libspdk_event_scheduler.so.4.0 00:03:39.201 SO libspdk_event_iobuf.so.3.0 00:03:39.201 SO libspdk_event_vhost_blk.so.3.0 00:03:39.201 SO libspdk_event_sock.so.5.0 00:03:39.201 SO libspdk_event_vfu_tgt.so.3.0 00:03:39.201 SO libspdk_event_vmd.so.6.0 00:03:39.201 SYMLINK libspdk_event_keyring.so 
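Note: the LIB / SO / SYMLINK triplets running through this stretch are SPDK's per-library make steps: LIB archives the objects into a static library, SO links the versioned shared object, and SYMLINK points the unversioned name at it. A rough illustration of what one SYMLINK step amounts to, using the libspdk_ut pair named earlier in this log (the build/lib output directory is an assumption, not read from SPDK's makefiles):

    # Illustrative equivalent of 'SO libspdk_ut.so.2.0' followed by
    # 'SYMLINK libspdk_ut.so': consumers link against the unversioned
    # name, which resolves to the ABI version built by this job.
    cd build/lib    # assumed output directory
    ln -sf libspdk_ut.so.2.0 libspdk_ut.so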
00:03:39.201 SYMLINK libspdk_event_fsdev.so 00:03:39.513 SYMLINK libspdk_event_scheduler.so 00:03:39.513 SYMLINK libspdk_event_iobuf.so 00:03:39.513 SYMLINK libspdk_event_vfu_tgt.so 00:03:39.513 SYMLINK libspdk_event_vhost_blk.so 00:03:39.513 SYMLINK libspdk_event_sock.so 00:03:39.513 SYMLINK libspdk_event_vmd.so 00:03:39.813 CC module/event/subsystems/accel/accel.o 00:03:39.813 LIB libspdk_event_accel.a 00:03:39.814 SO libspdk_event_accel.so.6.0 00:03:40.074 SYMLINK libspdk_event_accel.so 00:03:40.335 CC module/event/subsystems/bdev/bdev.o 00:03:40.596 LIB libspdk_event_bdev.a 00:03:40.596 SO libspdk_event_bdev.so.6.0 00:03:40.596 SYMLINK libspdk_event_bdev.so 00:03:40.857 CC module/event/subsystems/ublk/ublk.o 00:03:40.857 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:40.857 CC module/event/subsystems/nbd/nbd.o 00:03:40.857 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:40.857 CC module/event/subsystems/scsi/scsi.o 00:03:41.119 LIB libspdk_event_ublk.a 00:03:41.119 SO libspdk_event_ublk.so.3.0 00:03:41.119 LIB libspdk_event_nbd.a 00:03:41.119 LIB libspdk_event_scsi.a 00:03:41.119 LIB libspdk_event_nvmf.a 00:03:41.119 SO libspdk_event_nbd.so.6.0 00:03:41.119 SO libspdk_event_scsi.so.6.0 00:03:41.119 SO libspdk_event_nvmf.so.6.0 00:03:41.119 SYMLINK libspdk_event_ublk.so 00:03:41.119 SYMLINK libspdk_event_nbd.so 00:03:41.119 SYMLINK libspdk_event_scsi.so 00:03:41.119 SYMLINK libspdk_event_nvmf.so 00:03:41.693 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:41.693 CC module/event/subsystems/iscsi/iscsi.o 00:03:41.693 LIB libspdk_event_vhost_scsi.a 00:03:41.693 LIB libspdk_event_iscsi.a 00:03:41.693 SO libspdk_event_vhost_scsi.so.3.0 00:03:41.693 SO libspdk_event_iscsi.so.6.0 00:03:41.955 SYMLINK libspdk_event_vhost_scsi.so 00:03:41.955 SYMLINK libspdk_event_iscsi.so 00:03:41.955 SO libspdk.so.6.0 00:03:41.955 SYMLINK libspdk.so 00:03:42.526 CXX app/trace/trace.o 00:03:42.526 CC test/rpc_client/rpc_client_test.o 00:03:42.526 CC app/spdk_nvme_identify/identify.o 00:03:42.526 CC app/trace_record/trace_record.o 00:03:42.526 CC app/spdk_top/spdk_top.o 00:03:42.526 TEST_HEADER include/spdk/accel.h 00:03:42.526 TEST_HEADER include/spdk/accel_module.h 00:03:42.526 CC app/spdk_nvme_perf/perf.o 00:03:42.526 TEST_HEADER include/spdk/assert.h 00:03:42.526 TEST_HEADER include/spdk/barrier.h 00:03:42.526 TEST_HEADER include/spdk/base64.h 00:03:42.526 TEST_HEADER include/spdk/bdev.h 00:03:42.526 CC app/spdk_nvme_discover/discovery_aer.o 00:03:42.526 TEST_HEADER include/spdk/bdev_module.h 00:03:42.526 TEST_HEADER include/spdk/bdev_zone.h 00:03:42.526 CC app/spdk_lspci/spdk_lspci.o 00:03:42.526 TEST_HEADER include/spdk/bit_array.h 00:03:42.526 TEST_HEADER include/spdk/bit_pool.h 00:03:42.526 TEST_HEADER include/spdk/blob_bdev.h 00:03:42.526 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:42.526 TEST_HEADER include/spdk/blobfs.h 00:03:42.526 TEST_HEADER include/spdk/blob.h 00:03:42.526 TEST_HEADER include/spdk/conf.h 00:03:42.526 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:42.526 TEST_HEADER include/spdk/config.h 00:03:42.526 TEST_HEADER include/spdk/cpuset.h 00:03:42.526 TEST_HEADER include/spdk/crc16.h 00:03:42.526 TEST_HEADER include/spdk/crc32.h 00:03:42.526 TEST_HEADER include/spdk/crc64.h 00:03:42.526 TEST_HEADER include/spdk/dif.h 00:03:42.526 TEST_HEADER include/spdk/endian.h 00:03:42.526 TEST_HEADER include/spdk/dma.h 00:03:42.526 TEST_HEADER include/spdk/env_dpdk.h 00:03:42.526 TEST_HEADER include/spdk/event.h 00:03:42.526 TEST_HEADER include/spdk/env.h 00:03:42.526 TEST_HEADER 
include/spdk/fd_group.h 00:03:42.526 TEST_HEADER include/spdk/fd.h 00:03:42.526 TEST_HEADER include/spdk/fsdev.h 00:03:42.526 TEST_HEADER include/spdk/file.h 00:03:42.526 TEST_HEADER include/spdk/fsdev_module.h 00:03:42.526 TEST_HEADER include/spdk/ftl.h 00:03:42.526 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:42.526 TEST_HEADER include/spdk/gpt_spec.h 00:03:42.526 CC app/spdk_dd/spdk_dd.o 00:03:42.526 TEST_HEADER include/spdk/hexlify.h 00:03:42.526 TEST_HEADER include/spdk/histogram_data.h 00:03:42.526 TEST_HEADER include/spdk/idxd.h 00:03:42.526 TEST_HEADER include/spdk/idxd_spec.h 00:03:42.526 TEST_HEADER include/spdk/init.h 00:03:42.526 TEST_HEADER include/spdk/ioat.h 00:03:42.526 TEST_HEADER include/spdk/iscsi_spec.h 00:03:42.526 TEST_HEADER include/spdk/ioat_spec.h 00:03:42.526 CC app/nvmf_tgt/nvmf_main.o 00:03:42.526 CC app/iscsi_tgt/iscsi_tgt.o 00:03:42.526 TEST_HEADER include/spdk/json.h 00:03:42.526 TEST_HEADER include/spdk/jsonrpc.h 00:03:42.526 TEST_HEADER include/spdk/keyring.h 00:03:42.526 TEST_HEADER include/spdk/keyring_module.h 00:03:42.526 TEST_HEADER include/spdk/likely.h 00:03:42.526 TEST_HEADER include/spdk/log.h 00:03:42.526 TEST_HEADER include/spdk/lvol.h 00:03:42.526 TEST_HEADER include/spdk/md5.h 00:03:42.526 TEST_HEADER include/spdk/memory.h 00:03:42.526 TEST_HEADER include/spdk/mmio.h 00:03:42.526 TEST_HEADER include/spdk/nbd.h 00:03:42.526 TEST_HEADER include/spdk/net.h 00:03:42.526 TEST_HEADER include/spdk/notify.h 00:03:42.526 CC app/spdk_tgt/spdk_tgt.o 00:03:42.526 TEST_HEADER include/spdk/nvme.h 00:03:42.526 TEST_HEADER include/spdk/nvme_intel.h 00:03:42.526 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:42.526 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:42.526 TEST_HEADER include/spdk/nvme_spec.h 00:03:42.526 TEST_HEADER include/spdk/nvme_zns.h 00:03:42.526 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:42.526 TEST_HEADER include/spdk/nvmf_spec.h 00:03:42.526 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:42.526 TEST_HEADER include/spdk/nvmf.h 00:03:42.526 TEST_HEADER include/spdk/nvmf_transport.h 00:03:42.526 TEST_HEADER include/spdk/opal_spec.h 00:03:42.526 TEST_HEADER include/spdk/opal.h 00:03:42.526 TEST_HEADER include/spdk/pci_ids.h 00:03:42.526 TEST_HEADER include/spdk/pipe.h 00:03:42.526 TEST_HEADER include/spdk/queue.h 00:03:42.526 TEST_HEADER include/spdk/reduce.h 00:03:42.526 TEST_HEADER include/spdk/rpc.h 00:03:42.526 TEST_HEADER include/spdk/scheduler.h 00:03:42.526 TEST_HEADER include/spdk/scsi.h 00:03:42.526 TEST_HEADER include/spdk/sock.h 00:03:42.526 TEST_HEADER include/spdk/scsi_spec.h 00:03:42.526 TEST_HEADER include/spdk/stdinc.h 00:03:42.526 TEST_HEADER include/spdk/string.h 00:03:42.526 TEST_HEADER include/spdk/thread.h 00:03:42.526 TEST_HEADER include/spdk/trace.h 00:03:42.526 TEST_HEADER include/spdk/tree.h 00:03:42.526 TEST_HEADER include/spdk/trace_parser.h 00:03:42.526 TEST_HEADER include/spdk/ublk.h 00:03:42.526 TEST_HEADER include/spdk/util.h 00:03:42.526 TEST_HEADER include/spdk/uuid.h 00:03:42.526 TEST_HEADER include/spdk/version.h 00:03:42.526 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:42.526 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:42.526 TEST_HEADER include/spdk/vhost.h 00:03:42.526 TEST_HEADER include/spdk/xor.h 00:03:42.526 TEST_HEADER include/spdk/vmd.h 00:03:42.526 TEST_HEADER include/spdk/zipf.h 00:03:42.526 CXX test/cpp_headers/accel.o 00:03:42.526 CXX test/cpp_headers/accel_module.o 00:03:42.526 CXX test/cpp_headers/barrier.o 00:03:42.526 CXX test/cpp_headers/assert.o 00:03:42.526 CXX 
test/cpp_headers/base64.o 00:03:42.526 CXX test/cpp_headers/bdev_module.o 00:03:42.526 CXX test/cpp_headers/bdev.o 00:03:42.526 CXX test/cpp_headers/bdev_zone.o 00:03:42.526 CXX test/cpp_headers/bit_array.o 00:03:42.526 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.526 CXX test/cpp_headers/bit_pool.o 00:03:42.526 CXX test/cpp_headers/blob_bdev.o 00:03:42.526 CXX test/cpp_headers/blobfs.o 00:03:42.526 CXX test/cpp_headers/blob.o 00:03:42.526 CXX test/cpp_headers/cpuset.o 00:03:42.527 CXX test/cpp_headers/conf.o 00:03:42.527 CXX test/cpp_headers/config.o 00:03:42.527 CXX test/cpp_headers/crc32.o 00:03:42.527 CXX test/cpp_headers/crc16.o 00:03:42.527 CXX test/cpp_headers/dif.o 00:03:42.527 CXX test/cpp_headers/crc64.o 00:03:42.527 CXX test/cpp_headers/dma.o 00:03:42.527 CXX test/cpp_headers/env_dpdk.o 00:03:42.527 CXX test/cpp_headers/endian.o 00:03:42.527 CXX test/cpp_headers/env.o 00:03:42.527 CXX test/cpp_headers/fd_group.o 00:03:42.527 CXX test/cpp_headers/event.o 00:03:42.527 CXX test/cpp_headers/fd.o 00:03:42.527 CC examples/util/zipf/zipf.o 00:03:42.527 CXX test/cpp_headers/file.o 00:03:42.527 CXX test/cpp_headers/fsdev_module.o 00:03:42.527 CXX test/cpp_headers/fsdev.o 00:03:42.527 CXX test/cpp_headers/ftl.o 00:03:42.527 CC examples/ioat/perf/perf.o 00:03:42.527 CXX test/cpp_headers/fuse_dispatcher.o 00:03:42.527 CXX test/cpp_headers/hexlify.o 00:03:42.527 CXX test/cpp_headers/gpt_spec.o 00:03:42.527 CXX test/cpp_headers/idxd_spec.o 00:03:42.527 CXX test/cpp_headers/idxd.o 00:03:42.527 CXX test/cpp_headers/histogram_data.o 00:03:42.527 CXX test/cpp_headers/init.o 00:03:42.527 CXX test/cpp_headers/ioat.o 00:03:42.527 CXX test/cpp_headers/iscsi_spec.o 00:03:42.527 CXX test/cpp_headers/json.o 00:03:42.791 CXX test/cpp_headers/ioat_spec.o 00:03:42.791 CXX test/cpp_headers/jsonrpc.o 00:03:42.791 CXX test/cpp_headers/keyring_module.o 00:03:42.791 CXX test/cpp_headers/keyring.o 00:03:42.791 CXX test/cpp_headers/likely.o 00:03:42.791 CXX test/cpp_headers/log.o 00:03:42.791 CXX test/cpp_headers/md5.o 00:03:42.791 CC examples/ioat/verify/verify.o 00:03:42.791 CXX test/cpp_headers/lvol.o 00:03:42.791 CXX test/cpp_headers/mmio.o 00:03:42.791 CC test/app/stub/stub.o 00:03:42.791 CXX test/cpp_headers/memory.o 00:03:42.791 CXX test/cpp_headers/net.o 00:03:42.791 CXX test/cpp_headers/nvme.o 00:03:42.791 CXX test/cpp_headers/nbd.o 00:03:42.791 CXX test/cpp_headers/notify.o 00:03:42.791 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.791 CC test/app/jsoncat/jsoncat.o 00:03:42.791 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.791 CXX test/cpp_headers/nvme_intel.o 00:03:42.791 CXX test/cpp_headers/nvme_spec.o 00:03:42.791 CXX test/cpp_headers/nvme_zns.o 00:03:42.791 CC test/env/pci/pci_ut.o 00:03:42.791 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.791 CXX test/cpp_headers/nvmf.o 00:03:42.791 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.791 CXX test/cpp_headers/nvmf_spec.o 00:03:42.791 CXX test/cpp_headers/opal_spec.o 00:03:42.791 CXX test/cpp_headers/nvmf_transport.o 00:03:42.791 CC test/app/histogram_perf/histogram_perf.o 00:03:42.791 CXX test/cpp_headers/opal.o 00:03:42.791 CXX test/cpp_headers/pipe.o 00:03:42.791 CXX test/cpp_headers/pci_ids.o 00:03:42.791 CC test/env/vtophys/vtophys.o 00:03:42.791 CXX test/cpp_headers/scsi.o 00:03:42.791 CXX test/cpp_headers/queue.o 00:03:42.791 CXX test/cpp_headers/reduce.o 00:03:42.791 CXX test/cpp_headers/rpc.o 00:03:42.791 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.791 LINK spdk_lspci 00:03:42.791 CXX test/cpp_headers/scsi_spec.o 00:03:42.791 CXX 
test/cpp_headers/scheduler.o 00:03:42.791 CXX test/cpp_headers/sock.o 00:03:42.791 CXX test/cpp_headers/string.o 00:03:42.791 CXX test/cpp_headers/stdinc.o 00:03:42.791 CXX test/cpp_headers/thread.o 00:03:42.791 CXX test/cpp_headers/trace.o 00:03:42.791 CXX test/cpp_headers/trace_parser.o 00:03:42.791 LINK rpc_client_test 00:03:42.791 CXX test/cpp_headers/tree.o 00:03:42.791 CC test/env/memory/memory_ut.o 00:03:42.791 CXX test/cpp_headers/ublk.o 00:03:42.791 CXX test/cpp_headers/util.o 00:03:42.791 CXX test/cpp_headers/uuid.o 00:03:42.791 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.791 CXX test/cpp_headers/version.o 00:03:42.791 CXX test/cpp_headers/vhost.o 00:03:42.791 CC test/thread/poller_perf/poller_perf.o 00:03:42.791 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.791 CXX test/cpp_headers/zipf.o 00:03:42.791 CXX test/cpp_headers/vmd.o 00:03:42.791 CC app/fio/nvme/fio_plugin.o 00:03:42.791 CXX test/cpp_headers/xor.o 00:03:42.791 CC test/app/bdev_svc/bdev_svc.o 00:03:42.791 CC test/dma/test_dma/test_dma.o 00:03:42.791 CC app/fio/bdev/fio_plugin.o 00:03:42.791 LINK interrupt_tgt 00:03:42.791 LINK spdk_nvme_discover 00:03:42.791 LINK nvmf_tgt 00:03:43.052 LINK spdk_trace_record 00:03:43.052 LINK iscsi_tgt 00:03:43.052 LINK spdk_tgt 00:03:43.052 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:43.052 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.052 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:43.052 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.052 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.052 LINK verify 00:03:43.313 LINK spdk_dd 00:03:43.313 LINK spdk_trace 00:03:43.313 LINK bdev_svc 00:03:43.313 LINK zipf 00:03:43.313 LINK histogram_perf 00:03:43.313 LINK poller_perf 00:03:43.313 LINK ioat_perf 00:03:43.313 LINK jsoncat 00:03:43.313 LINK vtophys 00:03:43.313 LINK stub 00:03:43.313 LINK env_dpdk_post_init 00:03:43.573 LINK spdk_top 00:03:43.573 CC app/vhost/vhost.o 00:03:43.573 LINK spdk_nvme_perf 00:03:43.573 LINK nvme_fuzz 00:03:43.573 LINK pci_ut 00:03:43.833 LINK test_dma 00:03:43.834 LINK vhost_fuzz 00:03:43.834 CC test/event/event_perf/event_perf.o 00:03:43.834 CC test/event/reactor/reactor.o 00:03:43.834 LINK spdk_nvme 00:03:43.834 CC test/event/reactor_perf/reactor_perf.o 00:03:43.834 LINK spdk_bdev 00:03:43.834 CC test/event/app_repeat/app_repeat.o 00:03:43.834 CC examples/vmd/lsvmd/lsvmd.o 00:03:43.834 CC examples/idxd/perf/perf.o 00:03:43.834 CC examples/sock/hello_world/hello_sock.o 00:03:43.834 CC test/event/scheduler/scheduler.o 00:03:43.834 CC examples/vmd/led/led.o 00:03:43.834 CC examples/thread/thread/thread_ex.o 00:03:43.834 LINK spdk_nvme_identify 00:03:43.834 LINK mem_callbacks 00:03:43.834 LINK vhost 00:03:43.834 LINK reactor 00:03:43.834 LINK event_perf 00:03:43.834 LINK reactor_perf 00:03:44.095 LINK lsvmd 00:03:44.095 LINK led 00:03:44.095 LINK app_repeat 00:03:44.095 LINK scheduler 00:03:44.095 LINK hello_sock 00:03:44.095 LINK thread 00:03:44.095 LINK idxd_perf 00:03:44.355 CC test/nvme/reset/reset.o 00:03:44.355 LINK memory_ut 00:03:44.355 CC test/nvme/reserve/reserve.o 00:03:44.355 CC test/nvme/sgl/sgl.o 00:03:44.355 CC test/nvme/overhead/overhead.o 00:03:44.355 CC test/nvme/simple_copy/simple_copy.o 00:03:44.355 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.355 CC test/nvme/e2edp/nvme_dp.o 00:03:44.355 CC test/nvme/cuse/cuse.o 00:03:44.355 CC test/nvme/boot_partition/boot_partition.o 00:03:44.355 CC test/accel/dif/dif.o 00:03:44.355 CC test/nvme/compliance/nvme_compliance.o 00:03:44.355 CC test/nvme/err_injection/err_injection.o 
00:03:44.355 CC test/nvme/startup/startup.o 00:03:44.355 CC test/nvme/aer/aer.o 00:03:44.355 CC test/nvme/fdp/fdp.o 00:03:44.355 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.355 CC test/nvme/connect_stress/connect_stress.o 00:03:44.355 CC test/blobfs/mkfs/mkfs.o 00:03:44.355 LINK iscsi_fuzz 00:03:44.355 CC test/lvol/esnap/esnap.o 00:03:44.616 LINK reserve 00:03:44.616 LINK boot_partition 00:03:44.616 LINK startup 00:03:44.616 LINK doorbell_aers 00:03:44.616 LINK connect_stress 00:03:44.616 LINK err_injection 00:03:44.616 LINK simple_copy 00:03:44.616 LINK sgl 00:03:44.616 CC examples/nvme/hello_world/hello_world.o 00:03:44.616 LINK fused_ordering 00:03:44.616 CC examples/nvme/hotplug/hotplug.o 00:03:44.616 CC examples/nvme/arbitration/arbitration.o 00:03:44.616 CC examples/nvme/reconnect/reconnect.o 00:03:44.616 CC examples/nvme/abort/abort.o 00:03:44.616 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.616 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.616 LINK reset 00:03:44.616 LINK nvme_dp 00:03:44.616 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.616 LINK overhead 00:03:44.616 LINK aer 00:03:44.616 CC examples/accel/perf/accel_perf.o 00:03:44.616 LINK mkfs 00:03:44.616 LINK nvme_compliance 00:03:44.616 LINK fdp 00:03:44.616 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:44.616 CC examples/blob/hello_world/hello_blob.o 00:03:44.616 CC examples/blob/cli/blobcli.o 00:03:44.878 LINK cmb_copy 00:03:44.878 LINK pmr_persistence 00:03:44.878 LINK hello_world 00:03:44.878 LINK hotplug 00:03:44.878 LINK abort 00:03:44.878 LINK reconnect 00:03:44.878 LINK arbitration 00:03:44.878 LINK dif 00:03:44.878 LINK hello_fsdev 00:03:44.878 LINK hello_blob 00:03:44.878 LINK nvme_manage 00:03:45.139 LINK accel_perf 00:03:45.139 LINK blobcli 00:03:45.401 LINK cuse 00:03:45.401 CC test/bdev/bdevio/bdevio.o 00:03:45.663 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.663 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.924 LINK hello_bdev 00:03:45.924 LINK bdevio 00:03:46.496 LINK bdevperf 00:03:47.067 CC examples/nvmf/nvmf/nvmf.o 00:03:47.327 LINK nvmf 00:03:48.271 LINK esnap 00:03:48.532 00:03:48.532 real 0m53.810s 00:03:48.532 user 7m44.364s 00:03:48.532 sys 4m23.566s 00:03:48.532 18:53:17 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:48.532 18:53:17 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.532 ************************************ 00:03:48.532 END TEST make 00:03:48.532 ************************************ 00:03:48.532 18:53:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.532 18:53:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.532 18:53:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.532 18:53:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.532 18:53:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.794 18:53:17 -- pm/common@44 -- $ pid=24079 00:03:48.794 18:53:17 -- pm/common@50 -- $ kill -TERM 24079 00:03:48.794 18:53:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.794 18:53:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.794 18:53:17 -- pm/common@44 -- $ pid=24080 00:03:48.794 18:53:17 -- pm/common@50 -- $ kill -TERM 24080 00:03:48.794 18:53:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.794 18:53:17 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:48.794 18:53:17 -- pm/common@44 -- $ pid=24082 00:03:48.794 18:53:17 -- pm/common@50 -- $ kill -TERM 24082 00:03:48.794 18:53:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.794 18:53:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:48.794 18:53:17 -- pm/common@44 -- $ pid=24101 00:03:48.794 18:53:17 -- pm/common@50 -- $ sudo -E kill -TERM 24101 00:03:48.794 18:53:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:48.794 18:53:17 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:48.794 18:53:17 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:48.794 18:53:18 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:48.794 18:53:18 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:48.794 18:53:18 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:48.794 18:53:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.794 18:53:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.794 18:53:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.794 18:53:18 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.794 18:53:18 -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.794 18:53:18 -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.794 18:53:18 -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.794 18:53:18 -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.794 18:53:18 -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.794 18:53:18 -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.794 18:53:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.794 18:53:18 -- scripts/common.sh@344 -- # case "$op" in 00:03:48.794 18:53:18 -- scripts/common.sh@345 -- # : 1 00:03:48.794 18:53:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.794 18:53:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.794 18:53:18 -- scripts/common.sh@365 -- # decimal 1 00:03:48.794 18:53:18 -- scripts/common.sh@353 -- # local d=1 00:03:48.794 18:53:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.794 18:53:18 -- scripts/common.sh@355 -- # echo 1 00:03:48.794 18:53:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.794 18:53:18 -- scripts/common.sh@366 -- # decimal 2 00:03:48.794 18:53:18 -- scripts/common.sh@353 -- # local d=2 00:03:48.794 18:53:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.794 18:53:18 -- scripts/common.sh@355 -- # echo 2 00:03:48.794 18:53:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.794 18:53:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.794 18:53:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.794 18:53:18 -- scripts/common.sh@368 -- # return 0 00:03:48.794 18:53:18 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.794 18:53:18 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.794 --rc genhtml_branch_coverage=1 00:03:48.794 --rc genhtml_function_coverage=1 00:03:48.794 --rc genhtml_legend=1 00:03:48.794 --rc geninfo_all_blocks=1 00:03:48.794 --rc geninfo_unexecuted_blocks=1 00:03:48.794 00:03:48.794 ' 00:03:48.794 18:53:18 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.794 --rc genhtml_branch_coverage=1 00:03:48.794 --rc genhtml_function_coverage=1 00:03:48.794 --rc genhtml_legend=1 00:03:48.794 --rc geninfo_all_blocks=1 00:03:48.794 --rc geninfo_unexecuted_blocks=1 00:03:48.794 00:03:48.794 ' 00:03:48.794 18:53:18 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.794 --rc genhtml_branch_coverage=1 00:03:48.794 --rc genhtml_function_coverage=1 00:03:48.794 --rc genhtml_legend=1 00:03:48.794 --rc geninfo_all_blocks=1 00:03:48.794 --rc geninfo_unexecuted_blocks=1 00:03:48.794 00:03:48.794 ' 00:03:48.794 18:53:18 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:48.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.794 --rc genhtml_branch_coverage=1 00:03:48.794 --rc genhtml_function_coverage=1 00:03:48.794 --rc genhtml_legend=1 00:03:48.794 --rc geninfo_all_blocks=1 00:03:48.794 --rc geninfo_unexecuted_blocks=1 00:03:48.794 00:03:48.794 ' 00:03:48.794 18:53:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.794 18:53:18 -- nvmf/common.sh@7 -- # uname -s 00:03:48.794 18:53:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.794 18:53:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.794 18:53:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.794 18:53:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.794 18:53:18 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.794 18:53:18 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:48.794 18:53:18 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.794 18:53:18 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:48.794 18:53:18 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:48.794 18:53:18 -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:48.794 18:53:18 -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.794 18:53:18 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:48.794 18:53:18 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:03:48.794 18:53:18 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.057 18:53:18 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.057 18:53:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.057 18:53:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.057 18:53:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.057 18:53:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.057 18:53:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.057 18:53:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.057 18:53:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.057 18:53:18 -- paths/export.sh@5 -- # export PATH 00:03:49.057 18:53:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.057 18:53:18 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:03:49.057 18:53:18 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:49.057 18:53:18 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:49.057 18:53:18 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:49.057 18:53:18 -- nvmf/common.sh@50 -- # : 0 00:03:49.057 18:53:18 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:49.057 18:53:18 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:49.057 18:53:18 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:49.057 18:53:18 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.057 18:53:18 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.057 18:53:18 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:49.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:49.057 18:53:18 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:49.057 18:53:18 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:49.057 18:53:18 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:49.057 18:53:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.057 18:53:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.057 18:53:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.057 18:53:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t 
%c %h' 00:03:49.057 18:53:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:49.057 18:53:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.057 18:53:18 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:49.057 18:53:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.057 18:53:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.057 18:53:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.057 18:53:18 -- spdk/autotest.sh@48 -- # udevadm_pid=89312 00:03:49.057 18:53:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.057 18:53:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.057 18:53:18 -- pm/common@17 -- # local monitor 00:03:49.057 18:53:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.057 18:53:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.057 18:53:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.057 18:53:18 -- pm/common@21 -- # date +%s 00:03:49.057 18:53:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.057 18:53:18 -- pm/common@21 -- # date +%s 00:03:49.057 18:53:18 -- pm/common@25 -- # sleep 1 00:03:49.057 18:53:18 -- pm/common@21 -- # date +%s 00:03:49.057 18:53:18 -- pm/common@21 -- # date +%s 00:03:49.057 18:53:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730829198 00:03:49.057 18:53:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730829198 00:03:49.057 18:53:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730829198 00:03:49.057 18:53:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730829198 00:03:49.057 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730829198_collect-cpu-load.pm.log 00:03:49.057 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730829198_collect-vmstat.pm.log 00:03:49.057 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730829198_collect-cpu-temp.pm.log 00:03:49.057 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730829198_collect-bmc-pm.bmc.pm.log 00:03:50.003 18:53:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.003 18:53:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.003 18:53:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.003 18:53:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.003 18:53:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.003 18:53:19 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:50.003 18:53:19 -- common/autotest_common.sh@10 -- # set +x 
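Annotation: the two echo calls above set up SPDK's coredump capture, and the collect-* launches are the resource monitors whose PID files are killed with TERM at the top of this section. A condensed sketch of that lifecycle, assuming root and the workspace paths from this trace; xtrace does not show redirections, so the core-pattern target and the PID-file writes are assumptions here:

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$rootdir/../output

# Coredump routing: save the systemd-coredump pattern seen in the trace, then
# pipe cores to the collector script (assumed target: /proc/sys/kernel/core_pattern).
old_core_pattern=$(</proc/sys/kernel/core_pattern)
mkdir -p "$out/coredumps"
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

# Power/CPU monitors: the script names, -d/-l/-p flags and pm.log names match
# the trace; collect-bmc-pm additionally runs (and is killed) under sudo -E.
ts=$(date +%s)
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
  "$rootdir/scripts/perf/pm/$mon" -d "$out/power" -l -p "monitor.autotest.sh.$ts" \
    > "$out/power/monitor.autotest.sh.${ts}_$mon.pm.log" 2>&1 &
  echo $! > "$out/power/$mon.pid"
done

# Teardown mirrors the kill -TERM $(cat collect-*.pid) calls at the start of
# this section.
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
  [[ -e $out/power/$mon.pid ]] && kill -TERM "$(<"$out/power/$mon.pid")"
done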
00:03:50.003 18:53:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:50.003 18:53:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.003 18:53:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.003 18:53:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:50.003 18:53:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.003 18:53:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.003 18:53:19 -- common/autotest_common.sh@1455 -- # uname 00:03:50.003 18:53:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:50.003 18:53:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.003 18:53:19 -- common/autotest_common.sh@1475 -- # uname 00:03:50.003 18:53:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:50.003 18:53:19 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.003 18:53:19 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.003 lcov: LCOV version 1.15 00:03:50.003 18:53:19 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:11.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.971 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:21.971 18:53:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:21.971 18:53:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.971 18:53:49 -- common/autotest_common.sh@10 -- # set +x 00:04:21.971 18:53:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:21.971 18:53:49 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.883 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:23.883 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:23.883 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:24.143 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:24.143 0000:00:01.0 (8086 
0b00): Already using the ioatdma driver 00:04:24.143 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:24.404 18:53:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:24.404 18:53:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:24.404 18:53:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:24.404 18:53:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:24.404 18:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:24.404 18:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:24.404 18:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:24.404 18:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.404 18:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:24.404 18:53:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:24.404 18:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.404 18:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.404 18:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:24.404 18:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:24.404 18:53:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.404 No valid GPT data, bailing 00:04:24.404 18:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.404 18:53:53 -- scripts/common.sh@394 -- # pt= 00:04:24.404 18:53:53 -- scripts/common.sh@395 -- # return 1 00:04:24.404 18:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.404 1+0 records in 00:04:24.404 1+0 records out 00:04:24.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479943 s, 218 MB/s 00:04:24.404 18:53:53 -- spdk/autotest.sh@105 -- # sync 00:04:24.404 18:53:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.404 18:53:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.404 18:53:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:34.404 18:54:02 -- spdk/autotest.sh@111 -- # uname -s 00:04:34.404 18:54:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:34.404 18:54:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:34.404 18:54:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:36.318 Hugepages 00:04:36.318 node hugesize free / total 00:04:36.318 node0 1048576kB 0 / 0 00:04:36.318 node0 2048kB 0 / 0 00:04:36.318 node1 1048576kB 0 / 0 00:04:36.318 node1 2048kB 0 / 0 00:04:36.318 00:04:36.318 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.318 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:36.318 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:36.318 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:36.318 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:36.318 
I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:36.318 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:36.579 18:54:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:36.579 18:54:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:36.579 18:54:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:36.579 18:54:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.879 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:39.879 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.790 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:42.051 18:54:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:43.063 18:54:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:43.063 18:54:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:43.063 18:54:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.063 18:54:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:43.063 18:54:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:43.063 18:54:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:43.063 18:54:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.063 18:54:12 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.063 18:54:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:43.063 18:54:12 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:43.063 18:54:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:43.063 18:54:12 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.362 Waiting for block devices as requested 00:04:46.362 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:46.362 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:46.362 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:46.362 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:46.622 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:46.622 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:46.622 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:46.882 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:46.882 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:47.142 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:47.142 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:47.142 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:47.142 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:47.403 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:47.403 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:47.403 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:47.403 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:47.974 18:54:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:47.974 18:54:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:47.974 18:54:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:47.974 18:54:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:47.974 18:54:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:47.974 18:54:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:47.974 18:54:17 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:47.974 18:54:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:47.974 18:54:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:47.974 18:54:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:47.974 18:54:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:47.974 18:54:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:47.974 18:54:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:47.974 18:54:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:47.974 18:54:17 -- common/autotest_common.sh@1541 -- # continue 00:04:47.974 18:54:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:47.974 18:54:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.974 18:54:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.974 18:54:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:47.974 18:54:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.974 18:54:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.974 18:54:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.279 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:51.279 0000:00:01.1 (8086 0b00): 
ioatdma -> vfio-pci 00:04:51.279 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:51.279 18:54:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:51.279 18:54:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.279 18:54:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.279 18:54:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:51.279 18:54:20 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:51.279 18:54:20 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:51.279 18:54:20 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:51.279 18:54:20 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:51.279 18:54:20 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:51.279 18:54:20 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:51.279 18:54:20 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:51.279 18:54:20 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:51.279 18:54:20 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:51.279 18:54:20 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.279 18:54:20 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:51.279 18:54:20 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:51.540 18:54:20 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:51.540 18:54:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:51.540 18:54:20 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:51.540 18:54:20 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:51.540 18:54:20 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:51.540 18:54:20 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:51.540 18:54:20 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:51.540 18:54:20 -- common/autotest_common.sh@1570 -- # return 0 00:04:51.540 18:54:20 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:51.540 18:54:20 -- common/autotest_common.sh@1578 -- # return 0 00:04:51.540 18:54:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:51.540 18:54:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:51.540 18:54:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:51.540 18:54:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:51.540 18:54:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:51.540 18:54:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.540 18:54:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.540 18:54:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:51.540 18:54:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:51.540 18:54:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.540 18:54:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.540 18:54:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.540 ************************************ 00:04:51.540 START TEST env 00:04:51.540 ************************************ 00:04:51.540 18:54:20 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:51.540 * Looking for test storage... 
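Annotation: the empty result of the mapfile above is expected on this node. opal_revert_cleanup only targets controllers with PCI device ID 0x0a54, and the installed controller is 144d:a80a, so the 0xa80a == 0x0a54 test fails and the revert is a no-op. A condensed sketch of that filter (the helper name below is hypothetical; the real logic is get_nvme_bdfs_by_id in autotest_common.sh), reusing the gen_nvme.sh | jq enumeration shown in the trace:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
wanted=()
for bdf in "${bdfs[@]}"; do
  # Compare the sysfs PCI device ID against the OPAL-capable target ID.
  # Here 0000:65:00.0 reports 0xa80a, so nothing is selected.
  [[ $(<"/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && wanted+=("$bdf")
done
(( ${#wanted[@]} )) && printf '%s\n' "${wanted[@]}"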
00:04:51.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:51.540 18:54:20 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.540 18:54:20 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.540 18:54:20 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.802 18:54:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.802 18:54:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.802 18:54:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.802 18:54:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.802 18:54:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.802 18:54:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.802 18:54:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.802 18:54:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.802 18:54:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.802 18:54:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.802 18:54:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.802 18:54:20 env -- scripts/common.sh@344 -- # case "$op" in 00:04:51.802 18:54:20 env -- scripts/common.sh@345 -- # : 1 00:04:51.802 18:54:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.802 18:54:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.802 18:54:20 env -- scripts/common.sh@365 -- # decimal 1 00:04:51.802 18:54:20 env -- scripts/common.sh@353 -- # local d=1 00:04:51.802 18:54:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.802 18:54:20 env -- scripts/common.sh@355 -- # echo 1 00:04:51.802 18:54:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.802 18:54:20 env -- scripts/common.sh@366 -- # decimal 2 00:04:51.802 18:54:20 env -- scripts/common.sh@353 -- # local d=2 00:04:51.802 18:54:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.802 18:54:20 env -- scripts/common.sh@355 -- # echo 2 00:04:51.802 18:54:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.802 18:54:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.802 18:54:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.802 18:54:20 env -- scripts/common.sh@368 -- # return 0 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.802 --rc genhtml_branch_coverage=1 00:04:51.802 --rc genhtml_function_coverage=1 00:04:51.802 --rc genhtml_legend=1 00:04:51.802 --rc geninfo_all_blocks=1 00:04:51.802 --rc geninfo_unexecuted_blocks=1 00:04:51.802 00:04:51.802 ' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.802 --rc genhtml_branch_coverage=1 00:04:51.802 --rc genhtml_function_coverage=1 00:04:51.802 --rc genhtml_legend=1 00:04:51.802 --rc geninfo_all_blocks=1 00:04:51.802 --rc geninfo_unexecuted_blocks=1 00:04:51.802 00:04:51.802 ' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.802 --rc genhtml_branch_coverage=1 00:04:51.802 --rc genhtml_function_coverage=1 
00:04:51.802 --rc genhtml_legend=1 00:04:51.802 --rc geninfo_all_blocks=1 00:04:51.802 --rc geninfo_unexecuted_blocks=1 00:04:51.802 00:04:51.802 ' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.802 --rc genhtml_branch_coverage=1 00:04:51.802 --rc genhtml_function_coverage=1 00:04:51.802 --rc genhtml_legend=1 00:04:51.802 --rc geninfo_all_blocks=1 00:04:51.802 --rc geninfo_unexecuted_blocks=1 00:04:51.802 00:04:51.802 ' 00:04:51.802 18:54:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.802 18:54:20 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.802 18:54:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.802 ************************************ 00:04:51.802 START TEST env_memory 00:04:51.802 ************************************ 00:04:51.802 18:54:20 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:51.802 00:04:51.802 00:04:51.802 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.802 http://cunit.sourceforge.net/ 00:04:51.802 00:04:51.802 00:04:51.802 Suite: memory 00:04:51.802 Test: alloc and free memory map ...[2024-11-05 18:54:21.017344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:51.802 passed 00:04:51.802 Test: mem map translation ...[2024-11-05 18:54:21.042775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:51.802 [2024-11-05 18:54:21.042797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:51.802 [2024-11-05 18:54:21.042843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:51.802 [2024-11-05 18:54:21.042850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:51.802 passed 00:04:51.802 Test: mem map registration ...[2024-11-05 18:54:21.097981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:51.802 [2024-11-05 18:54:21.097998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:51.802 passed 00:04:52.064 Test: mem map adjacent registrations ...passed 00:04:52.064 00:04:52.064 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.064 suites 1 1 n/a 0 0 00:04:52.064 tests 4 4 4 0 0 00:04:52.064 asserts 152 152 152 0 n/a 00:04:52.064 00:04:52.064 Elapsed time = 0.193 seconds 00:04:52.064 00:04:52.064 real 0m0.206s 00:04:52.064 user 0m0.196s 00:04:52.064 sys 0m0.010s 00:04:52.064 18:54:21 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.064 18:54:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
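Annotation: the lt 1.15 2 / cmp_versions dance replayed at the start of this env suite (and earlier, before autotest) decides whether the old lcov 1.x --rc flags are needed. The comparison in condensed form, using the same IFS=.-: splitting and per-field numeric compare as scripts/common.sh; the decimal() digit validation is elided here:

lt() { # usage: lt 1.15 2  -> returns 0 (true) if $1 < $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov predates 2.x: enable the --rc lcov_*_coverage=1 flags"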
00:04:52.064 ************************************ 00:04:52.064 END TEST env_memory 00:04:52.064 ************************************ 00:04:52.064 18:54:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.064 18:54:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.064 18:54:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.064 18:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.064 ************************************ 00:04:52.064 START TEST env_vtophys 00:04:52.064 ************************************ 00:04:52.064 18:54:21 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.064 EAL: lib.eal log level changed from notice to debug 00:04:52.064 EAL: Detected lcore 0 as core 0 on socket 0 00:04:52.064 EAL: Detected lcore 1 as core 1 on socket 0 00:04:52.064 EAL: Detected lcore 2 as core 2 on socket 0 00:04:52.064 EAL: Detected lcore 3 as core 3 on socket 0 00:04:52.064 EAL: Detected lcore 4 as core 4 on socket 0 00:04:52.064 EAL: Detected lcore 5 as core 5 on socket 0 00:04:52.064 EAL: Detected lcore 6 as core 6 on socket 0 00:04:52.064 EAL: Detected lcore 7 as core 7 on socket 0 00:04:52.064 EAL: Detected lcore 8 as core 8 on socket 0 00:04:52.064 EAL: Detected lcore 9 as core 9 on socket 0 00:04:52.064 EAL: Detected lcore 10 as core 10 on socket 0 00:04:52.064 EAL: Detected lcore 11 as core 11 on socket 0 00:04:52.064 EAL: Detected lcore 12 as core 12 on socket 0 00:04:52.064 EAL: Detected lcore 13 as core 13 on socket 0 00:04:52.064 EAL: Detected lcore 14 as core 14 on socket 0 00:04:52.064 EAL: Detected lcore 15 as core 15 on socket 0 00:04:52.064 EAL: Detected lcore 16 as core 16 on socket 0 00:04:52.064 EAL: Detected lcore 17 as core 17 on socket 0 00:04:52.064 EAL: Detected lcore 18 as core 18 on socket 0 00:04:52.064 EAL: Detected lcore 19 as core 19 on socket 0 00:04:52.064 EAL: Detected lcore 20 as core 20 on socket 0 00:04:52.064 EAL: Detected lcore 21 as core 21 on socket 0 00:04:52.064 EAL: Detected lcore 22 as core 22 on socket 0 00:04:52.064 EAL: Detected lcore 23 as core 23 on socket 0 00:04:52.064 EAL: Detected lcore 24 as core 24 on socket 0 00:04:52.064 EAL: Detected lcore 25 as core 25 on socket 0 00:04:52.064 EAL: Detected lcore 26 as core 26 on socket 0 00:04:52.064 EAL: Detected lcore 27 as core 27 on socket 0 00:04:52.064 EAL: Detected lcore 28 as core 28 on socket 0 00:04:52.064 EAL: Detected lcore 29 as core 29 on socket 0 00:04:52.064 EAL: Detected lcore 30 as core 30 on socket 0 00:04:52.064 EAL: Detected lcore 31 as core 31 on socket 0 00:04:52.064 EAL: Detected lcore 32 as core 32 on socket 0 00:04:52.064 EAL: Detected lcore 33 as core 33 on socket 0 00:04:52.064 EAL: Detected lcore 34 as core 34 on socket 0 00:04:52.064 EAL: Detected lcore 35 as core 35 on socket 0 00:04:52.064 EAL: Detected lcore 36 as core 0 on socket 1 00:04:52.064 EAL: Detected lcore 37 as core 1 on socket 1 00:04:52.064 EAL: Detected lcore 38 as core 2 on socket 1 00:04:52.064 EAL: Detected lcore 39 as core 3 on socket 1 00:04:52.064 EAL: Detected lcore 40 as core 4 on socket 1 00:04:52.064 EAL: Detected lcore 41 as core 5 on socket 1 00:04:52.064 EAL: Detected lcore 42 as core 6 on socket 1 00:04:52.064 EAL: Detected lcore 43 as core 7 on socket 1 00:04:52.064 EAL: Detected lcore 44 as core 8 on socket 1 00:04:52.064 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:52.064 EAL: Detected lcore 46 as core 10 on socket 1 00:04:52.064 EAL: Detected lcore 47 as core 11 on socket 1 00:04:52.064 EAL: Detected lcore 48 as core 12 on socket 1 00:04:52.064 EAL: Detected lcore 49 as core 13 on socket 1 00:04:52.064 EAL: Detected lcore 50 as core 14 on socket 1 00:04:52.064 EAL: Detected lcore 51 as core 15 on socket 1 00:04:52.064 EAL: Detected lcore 52 as core 16 on socket 1 00:04:52.064 EAL: Detected lcore 53 as core 17 on socket 1 00:04:52.064 EAL: Detected lcore 54 as core 18 on socket 1 00:04:52.064 EAL: Detected lcore 55 as core 19 on socket 1 00:04:52.064 EAL: Detected lcore 56 as core 20 on socket 1 00:04:52.064 EAL: Detected lcore 57 as core 21 on socket 1 00:04:52.064 EAL: Detected lcore 58 as core 22 on socket 1 00:04:52.064 EAL: Detected lcore 59 as core 23 on socket 1 00:04:52.064 EAL: Detected lcore 60 as core 24 on socket 1 00:04:52.064 EAL: Detected lcore 61 as core 25 on socket 1 00:04:52.064 EAL: Detected lcore 62 as core 26 on socket 1 00:04:52.064 EAL: Detected lcore 63 as core 27 on socket 1 00:04:52.064 EAL: Detected lcore 64 as core 28 on socket 1 00:04:52.064 EAL: Detected lcore 65 as core 29 on socket 1 00:04:52.064 EAL: Detected lcore 66 as core 30 on socket 1 00:04:52.064 EAL: Detected lcore 67 as core 31 on socket 1 00:04:52.064 EAL: Detected lcore 68 as core 32 on socket 1 00:04:52.064 EAL: Detected lcore 69 as core 33 on socket 1 00:04:52.064 EAL: Detected lcore 70 as core 34 on socket 1 00:04:52.064 EAL: Detected lcore 71 as core 35 on socket 1 00:04:52.064 EAL: Detected lcore 72 as core 0 on socket 0 00:04:52.064 EAL: Detected lcore 73 as core 1 on socket 0 00:04:52.064 EAL: Detected lcore 74 as core 2 on socket 0 00:04:52.064 EAL: Detected lcore 75 as core 3 on socket 0 00:04:52.064 EAL: Detected lcore 76 as core 4 on socket 0 00:04:52.064 EAL: Detected lcore 77 as core 5 on socket 0 00:04:52.064 EAL: Detected lcore 78 as core 6 on socket 0 00:04:52.064 EAL: Detected lcore 79 as core 7 on socket 0 00:04:52.064 EAL: Detected lcore 80 as core 8 on socket 0 00:04:52.064 EAL: Detected lcore 81 as core 9 on socket 0 00:04:52.064 EAL: Detected lcore 82 as core 10 on socket 0 00:04:52.064 EAL: Detected lcore 83 as core 11 on socket 0 00:04:52.064 EAL: Detected lcore 84 as core 12 on socket 0 00:04:52.064 EAL: Detected lcore 85 as core 13 on socket 0 00:04:52.064 EAL: Detected lcore 86 as core 14 on socket 0 00:04:52.064 EAL: Detected lcore 87 as core 15 on socket 0 00:04:52.064 EAL: Detected lcore 88 as core 16 on socket 0 00:04:52.064 EAL: Detected lcore 89 as core 17 on socket 0 00:04:52.064 EAL: Detected lcore 90 as core 18 on socket 0 00:04:52.064 EAL: Detected lcore 91 as core 19 on socket 0 00:04:52.064 EAL: Detected lcore 92 as core 20 on socket 0 00:04:52.064 EAL: Detected lcore 93 as core 21 on socket 0 00:04:52.064 EAL: Detected lcore 94 as core 22 on socket 0 00:04:52.064 EAL: Detected lcore 95 as core 23 on socket 0 00:04:52.064 EAL: Detected lcore 96 as core 24 on socket 0 00:04:52.064 EAL: Detected lcore 97 as core 25 on socket 0 00:04:52.064 EAL: Detected lcore 98 as core 26 on socket 0 00:04:52.064 EAL: Detected lcore 99 as core 27 on socket 0 00:04:52.064 EAL: Detected lcore 100 as core 28 on socket 0 00:04:52.064 EAL: Detected lcore 101 as core 29 on socket 0 00:04:52.064 EAL: Detected lcore 102 as core 30 on socket 0 00:04:52.064 EAL: Detected lcore 103 as core 31 on socket 0 00:04:52.064 EAL: Detected lcore 104 as core 32 on socket 0 00:04:52.064 EAL: Detected lcore 105 as core 33 on socket 0 00:04:52.064 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:52.064 EAL: Detected lcore 107 as core 35 on socket 0 00:04:52.064 EAL: Detected lcore 108 as core 0 on socket 1 00:04:52.064 EAL: Detected lcore 109 as core 1 on socket 1 00:04:52.064 EAL: Detected lcore 110 as core 2 on socket 1 00:04:52.064 EAL: Detected lcore 111 as core 3 on socket 1 00:04:52.064 EAL: Detected lcore 112 as core 4 on socket 1 00:04:52.064 EAL: Detected lcore 113 as core 5 on socket 1 00:04:52.064 EAL: Detected lcore 114 as core 6 on socket 1 00:04:52.064 EAL: Detected lcore 115 as core 7 on socket 1 00:04:52.064 EAL: Detected lcore 116 as core 8 on socket 1 00:04:52.064 EAL: Detected lcore 117 as core 9 on socket 1 00:04:52.064 EAL: Detected lcore 118 as core 10 on socket 1 00:04:52.064 EAL: Detected lcore 119 as core 11 on socket 1 00:04:52.064 EAL: Detected lcore 120 as core 12 on socket 1 00:04:52.064 EAL: Detected lcore 121 as core 13 on socket 1 00:04:52.064 EAL: Detected lcore 122 as core 14 on socket 1 00:04:52.064 EAL: Detected lcore 123 as core 15 on socket 1 00:04:52.064 EAL: Detected lcore 124 as core 16 on socket 1 00:04:52.064 EAL: Detected lcore 125 as core 17 on socket 1 00:04:52.064 EAL: Detected lcore 126 as core 18 on socket 1 00:04:52.064 EAL: Detected lcore 127 as core 19 on socket 1 00:04:52.064 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:52.064 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:52.064 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:52.064 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:52.064 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:52.064 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:52.064 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:52.064 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:52.064 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:52.064 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:52.064 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:52.064 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:52.064 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:52.064 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:52.064 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:52.064 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:52.064 EAL: Maximum logical cores by configuration: 128 00:04:52.064 EAL: Detected CPU lcores: 128 00:04:52.064 EAL: Detected NUMA nodes: 2 00:04:52.064 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:52.064 EAL: Detected shared linkage of DPDK 00:04:52.064 EAL: No shared files mode enabled, IPC will be disabled 00:04:52.064 EAL: Bus pci wants IOVA as 'DC' 00:04:52.064 EAL: Buses did not request a specific IOVA mode. 00:04:52.064 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:52.064 EAL: Selected IOVA mode 'VA' 00:04:52.064 EAL: Probing VFIO support... 00:04:52.064 EAL: IOMMU type 1 (Type 1) is supported 00:04:52.064 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:52.064 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:52.064 EAL: VFIO support initialized 00:04:52.064 EAL: Ask a virtual area of 0x2e000 bytes 00:04:52.064 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:52.065 EAL: Setting up physically contiguous memory... 
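Annotation: the lcore map above is EAL reading standard Linux sysfs topology; the Skipped entries for lcores 128-143 fall beyond the build's "Maximum logical cores by configuration: 128" (DPDK's RTE_MAX_LCORE). The same 2-socket, 144-thread view can be reproduced from the shell:

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  lcore=${cpu##*cpu}
  core=$(<"$cpu/topology/core_id")
  socket=$(<"$cpu/topology/physical_package_id")
  echo "lcore $lcore is core $core on socket $socket"
done | sort -t' ' -k2,2n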
00:04:52.065 EAL: Setting maximum number of open files to 524288 00:04:52.065 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:52.065 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:52.065 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:52.065 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:52.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.065 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:52.065 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.065 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:52.065 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:52.065 EAL: Hugepages will be freed exactly as allocated. 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: TSC frequency is ~2400000 KHz 00:04:52.065 EAL: Main lcore 0 is ready (tid=7f75312e9a00;cpuset=[0]) 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 0 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 2MB 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:52.065 EAL: Mem event callback 'spdk:(nil)' registered 00:04:52.065 00:04:52.065 00:04:52.065 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.065 http://cunit.sourceforge.net/ 00:04:52.065 00:04:52.065 00:04:52.065 Suite: components_suite 00:04:52.065 Test: vtophys_malloc_test ...passed 00:04:52.065 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 4MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 4MB 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 6MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 6MB 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 10MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 10MB 00:04:52.065 EAL: Trying to obtain current memory policy. 
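Annotation: the Setting policy MPOL_PREFERRED / Restoring previous memory policy: 4 pairs that bracket each heap grow/shrink above and below are EAL steering hugepage allocation toward socket 0 and then restoring the prior policy (4 is MPOL_LOCAL in the kernel's numbering). The same knob is visible from the shell, assuming the numactl CLI is installed:

numactl --show                         # prints this shell's current NUMA policy
numactl --preferred=0 numactl --show   # re-run under MPOL_PREFERRED for node 0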
00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 18MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 18MB 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 34MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 34MB 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.065 EAL: Restoring previous memory policy: 4 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was expanded by 66MB 00:04:52.065 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.065 EAL: request: mp_malloc_sync 00:04:52.065 EAL: No shared files mode enabled, IPC is disabled 00:04:52.065 EAL: Heap on socket 0 was shrunk by 66MB 00:04:52.065 EAL: Trying to obtain current memory policy. 00:04:52.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.325 EAL: Restoring previous memory policy: 4 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.326 EAL: request: mp_malloc_sync 00:04:52.326 EAL: No shared files mode enabled, IPC is disabled 00:04:52.326 EAL: Heap on socket 0 was expanded by 130MB 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.326 EAL: request: mp_malloc_sync 00:04:52.326 EAL: No shared files mode enabled, IPC is disabled 00:04:52.326 EAL: Heap on socket 0 was shrunk by 130MB 00:04:52.326 EAL: Trying to obtain current memory policy. 00:04:52.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.326 EAL: Restoring previous memory policy: 4 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.326 EAL: request: mp_malloc_sync 00:04:52.326 EAL: No shared files mode enabled, IPC is disabled 00:04:52.326 EAL: Heap on socket 0 was expanded by 258MB 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.326 EAL: request: mp_malloc_sync 00:04:52.326 EAL: No shared files mode enabled, IPC is disabled 00:04:52.326 EAL: Heap on socket 0 was shrunk by 258MB 00:04:52.326 EAL: Trying to obtain current memory policy. 
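Annotation: the sizes in this expand/shrink ladder are not arbitrary: 4, 6, 10, 18, 34, 66, 130, 258 and (next) 514 and 1026 MB are 2^k + 2 MB for k = 1..10, consistent with the unit test doubling a power-of-two allocation while allocator overhead rounds each request up to whole hugepages. Hugepage consumption during such a run can be watched from the shell (field names are standard /proc/meminfo keys):

# Sample the HugePages counters once per second while the test runs.
while sleep 1; do
  awk '/^HugePages_(Total|Free):/ {printf "%s %s  ", $1, $2} END {print ""}' /proc/meminfo
done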
00:04:52.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.326 EAL: Restoring previous memory policy: 4 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.326 EAL: request: mp_malloc_sync 00:04:52.326 EAL: No shared files mode enabled, IPC is disabled 00:04:52.326 EAL: Heap on socket 0 was expanded by 514MB 00:04:52.326 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.586 EAL: request: mp_malloc_sync 00:04:52.586 EAL: No shared files mode enabled, IPC is disabled 00:04:52.586 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.586 EAL: Trying to obtain current memory policy. 00:04:52.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.586 EAL: Restoring previous memory policy: 4 00:04:52.586 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.586 EAL: request: mp_malloc_sync 00:04:52.586 EAL: No shared files mode enabled, IPC is disabled 00:04:52.586 EAL: Heap on socket 0 was expanded by 1026MB 00:04:52.846 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.846 EAL: request: mp_malloc_sync 00:04:52.846 EAL: No shared files mode enabled, IPC is disabled 00:04:52.846 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:52.846 passed 00:04:52.846 00:04:52.846 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.846 suites 1 1 n/a 0 0 00:04:52.846 tests 2 2 2 0 0 00:04:52.846 asserts 497 497 497 0 n/a 00:04:52.846 00:04:52.846 Elapsed time = 0.647 seconds 00:04:52.846 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.846 EAL: request: mp_malloc_sync 00:04:52.846 EAL: No shared files mode enabled, IPC is disabled 00:04:52.846 EAL: Heap on socket 0 was shrunk by 2MB 00:04:52.846 EAL: No shared files mode enabled, IPC is disabled 00:04:52.846 EAL: No shared files mode enabled, IPC is disabled 00:04:52.846 EAL: No shared files mode enabled, IPC is disabled 00:04:52.846 00:04:52.846 real 0m0.784s 00:04:52.846 user 0m0.415s 00:04:52.846 sys 0m0.334s 00:04:52.846 18:54:22 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.846 18:54:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:52.846 ************************************ 00:04:52.846 END TEST env_vtophys 00:04:52.846 ************************************ 00:04:52.846 18:54:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.846 18:54:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.846 18:54:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.846 18:54:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.846 ************************************ 00:04:52.846 START TEST env_pci 00:04:52.846 ************************************ 00:04:52.846 18:54:22 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:52.846 00:04:52.846 00:04:52.846 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.846 http://cunit.sourceforge.net/ 00:04:52.846 00:04:52.846 00:04:52.846 Suite: pci 00:04:52.846 Test: pci_hook ...[2024-11-05 18:54:22.134225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 109130 has claimed it 00:04:52.846 EAL: Cannot find device (10000:00:01.0) 00:04:52.846 EAL: Failed to attach device on primary process 00:04:52.846 passed 00:04:52.846 00:04:52.846 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:52.846               suites      1      1    n/a      0        0
00:04:52.846                tests      1      1      1      0        0
00:04:52.846              asserts     25     25     25      0      n/a
00:04:52.846
00:04:52.846 Elapsed time =    0.031 seconds
00:04:52.846
00:04:52.846 real	0m0.052s
00:04:52.846 user	0m0.018s
00:04:52.846 sys	0m0.034s
18:54:22 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:52.846 ************************************
00:04:52.846 END TEST env_pci
00:04:52.846 ************************************
00:04:53.106 18:54:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:53.106 18:54:22 env -- env/env.sh@15 -- # uname
00:04:53.106 18:54:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:53.106 18:54:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:53.106 18:54:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:53.106 18:54:22 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:04:53.106 18:54:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:53.106 18:54:22 env -- common/autotest_common.sh@10 -- # set +x
00:04:53.106 ************************************
00:04:53.106 START TEST env_dpdk_post_init
00:04:53.106 ************************************
00:04:53.106 18:54:22 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:53.106 EAL: Detected CPU lcores: 128
00:04:53.106 EAL: Detected NUMA nodes: 2
00:04:53.106 EAL: Detected shared linkage of DPDK
00:04:53.106 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:53.106 EAL: Selected IOVA mode 'VA'
00:04:53.106 EAL: VFIO support initialized
00:04:53.106 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:53.106 EAL: Using IOMMU type 1 (Type 1)
00:04:53.367 EAL: Ignore mapping IO port bar(1)
00:04:53.367 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:04:53.628 EAL: Ignore mapping IO port bar(1)
00:04:53.628 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:04:53.628 EAL: Ignore mapping IO port bar(1)
00:04:53.887 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:04:53.887 EAL: Ignore mapping IO port bar(1)
00:04:54.147 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:04:54.147 EAL: Ignore mapping IO port bar(1)
00:04:54.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:04:54.407 EAL: Ignore mapping IO port bar(1)
00:04:54.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:04:54.668 EAL: Ignore mapping IO port bar(1)
00:04:54.668 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:04:54.928 EAL: Ignore mapping IO port bar(1)
00:04:54.928 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:04:55.188 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:55.448 EAL: Ignore mapping IO port bar(1)
00:04:55.448 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:04:55.448 EAL: Ignore mapping IO port bar(1)
00:04:55.708 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:04:55.708 EAL: Ignore mapping IO port bar(1)
00:04:55.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:04:55.969 EAL: Ignore mapping IO port bar(1)
00:04:55.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:04:56.229 EAL: Ignore mapping IO port bar(1)
00:04:56.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:04:56.489 EAL: Ignore mapping IO port bar(1)
00:04:56.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:04:56.749 EAL: Ignore mapping IO port bar(1)
00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:04:57.009 EAL: Ignore mapping IO port bar(1)
00:04:57.009 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:04:57.009 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:57.009 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:04:57.009 Starting DPDK initialization...
00:04:57.009 Starting SPDK post initialization...
00:04:57.009 SPDK NVMe probe
00:04:57.009 Attaching to 0000:65:00.0
00:04:57.009 Attached to 0000:65:00.0
00:04:57.009 Cleaning up...
00:04:58.920
00:04:58.920 real	0m5.726s
00:04:58.920 user	0m0.112s
00:04:58.920 sys	0m0.157s
18:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:58.920 ************************************
00:04:58.920 END TEST env_dpdk_post_init
00:04:58.920 ************************************
00:04:58.920 18:54:28 env -- env/env.sh@26 -- # uname
00:04:58.920 18:54:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:58.920 18:54:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:58.920 18:54:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:58.920 18:54:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:58.920 18:54:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.920 ************************************
00:04:58.920 START TEST env_mem_callbacks
00:04:58.920 ************************************
00:04:58.920 18:54:28 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:58.920 EAL: Detected CPU lcores: 128
00:04:58.920 EAL: Detected NUMA nodes: 2
00:04:58.920 EAL: Detected shared linkage of DPDK
00:04:58.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:58.920 EAL: Selected IOVA mode 'VA'
00:04:58.920 EAL: VFIO support initialized
00:04:58.920 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:58.920
00:04:58.920
00:04:58.920 CUnit - A unit testing framework for C - Version 2.1-3
00:04:58.920 http://cunit.sourceforge.net/
00:04:58.920
00:04:58.920
00:04:58.920 Suite: memory
00:04:58.920 Test: test ...
00:04:58.920 register 0x200000200000 2097152
00:04:58.920 malloc 3145728
00:04:58.920 register 0x200000400000 4194304
00:04:58.920 buf 0x200000500000 len 3145728 PASSED
00:04:58.920 malloc 64
00:04:58.920 buf 0x2000004fff40 len 64 PASSED
00:04:58.920 malloc 4194304
00:04:58.920 register 0x200000800000 6291456
00:04:58.921 buf 0x200000a00000 len 4194304 PASSED
00:04:58.921 free 0x200000500000 3145728
00:04:58.921 free 0x2000004fff40 64
00:04:58.921 unregister 0x200000400000 4194304 PASSED
00:04:58.921 free 0x200000a00000 4194304
00:04:58.921 unregister 0x200000800000 6291456 PASSED
00:04:58.921 malloc 8388608
00:04:58.921 register 0x200000400000 10485760
00:04:58.921 buf 0x200000600000 len 8388608 PASSED
00:04:58.921 free 0x200000600000 8388608
00:04:58.921 unregister 0x200000400000 10485760 PASSED
00:04:58.921 passed
00:04:58.921
00:04:58.921 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:58.921               suites      1      1    n/a      0        0
00:04:58.921                tests      1      1      1      0        0
00:04:58.921              asserts     15     15     15      0      n/a
00:04:58.921
00:04:58.921 Elapsed time =    0.006 seconds
00:04:58.921
00:04:58.921 real	0m0.063s
00:04:58.921 user	0m0.019s
00:04:58.921 sys	0m0.044s
18:54:28 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:58.921 ************************************
00:04:58.921 END TEST env_mem_callbacks
00:04:58.921 ************************************
00:04:58.921
00:04:58.921 real	0m7.434s
00:04:58.921 user	0m1.012s
00:04:58.921 sys	0m0.960s
18:54:28 env -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:58.921 ************************************
00:04:58.921 END TEST env
00:04:58.921 ************************************
00:04:58.921 18:54:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:58.921 18:54:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:58.921 18:54:28 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:58.921 18:54:28 -- common/autotest_common.sh@10 -- # set +x
00:04:59.181 ************************************
00:04:59.181 START TEST rpc
00:04:59.181 ************************************
00:04:59.181 18:54:28 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:59.181 * Looking for test storage...
00:04:59.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:59.181 18:54:28 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:59.181 18:54:28 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:04:59.181 18:54:28 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:59.181 18:54:28 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:59.181 18:54:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.181 18:54:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.181 18:54:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.181 18:54:28 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.181 18:54:28 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.181 18:54:28 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.181 18:54:28 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.181 18:54:28 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.182 18:54:28 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.182 18:54:28 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.182 18:54:28 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:59.182 18:54:28 rpc -- scripts/common.sh@345 -- # : 1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.182 18:54:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.182 18:54:28 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@353 -- # local d=1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.182 18:54:28 rpc -- scripts/common.sh@355 -- # echo 1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.182 18:54:28 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:59.182 18:54:28 rpc -- scripts/common.sh@353 -- # local d=2
00:04:59.182 18:54:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.182 18:54:28 rpc -- scripts/common.sh@355 -- # echo 2
00:04:59.182 18:54:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.182 18:54:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.182 18:54:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.182 18:54:28 rpc -- scripts/common.sh@368 -- # return 0
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.182 --rc genhtml_branch_coverage=1
00:04:59.182 --rc genhtml_function_coverage=1
00:04:59.182 --rc genhtml_legend=1
00:04:59.182 --rc geninfo_all_blocks=1
00:04:59.182 --rc geninfo_unexecuted_blocks=1
00:04:59.182
00:04:59.182 '
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.182 --rc genhtml_branch_coverage=1
00:04:59.182 --rc genhtml_function_coverage=1
00:04:59.182 --rc genhtml_legend=1
00:04:59.182 --rc geninfo_all_blocks=1
00:04:59.182 --rc geninfo_unexecuted_blocks=1
00:04:59.182
00:04:59.182 '
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.182 --rc genhtml_branch_coverage=1
00:04:59.182 --rc genhtml_function_coverage=1
00:04:59.182 --rc genhtml_legend=1
00:04:59.182 --rc geninfo_all_blocks=1
00:04:59.182 --rc geninfo_unexecuted_blocks=1
00:04:59.182
00:04:59.182 '
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:59.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.182 --rc genhtml_branch_coverage=1
00:04:59.182 --rc genhtml_function_coverage=1
00:04:59.182 --rc genhtml_legend=1
00:04:59.182 --rc geninfo_all_blocks=1
00:04:59.182 --rc geninfo_unexecuted_blocks=1
00:04:59.182
00:04:59.182 '
00:04:59.182 18:54:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=110588
00:04:59.182 18:54:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:59.182 18:54:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 110588
00:04:59.182 18:54:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@833 -- # '[' -z 110588 ']'
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:54:28 rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:04:59.182 18:54:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:59.442 [2024-11-05 18:54:28.514904] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:04:59.442 [2024-11-05 18:54:28.514979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110588 ]
00:04:59.442 [2024-11-05 18:54:28.589808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.442 [2024-11-05 18:54:28.631460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:59.442 [2024-11-05 18:54:28.631494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110588' to capture a snapshot of events at runtime.
00:04:59.442 [2024-11-05 18:54:28.631502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:59.442 [2024-11-05 18:54:28.631509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:59.442 [2024-11-05 18:54:28.631515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110588 for offline analysis/debug.
00:04:59.442 [2024-11-05 18:54:28.632100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.012 18:54:29 rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:00.012 18:54:29 rpc -- common/autotest_common.sh@866 -- # return 0
00:05:00.013 18:54:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:00.013 18:54:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:00.013 18:54:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:00.013 18:54:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:00.013 18:54:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:00.013 18:54:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:00.013 18:54:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.273 ************************************
00:05:00.273 START TEST rpc_integrity
00:05:00.273 ************************************
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.273 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.273 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:00.273 {
00:05:00.273 "name": "Malloc0",
00:05:00.273 "aliases": [
00:05:00.273 "c55c0dc1-7d4f-406a-9f06-fa60d829ba75"
00:05:00.273 ],
00:05:00.273 "product_name": "Malloc disk",
00:05:00.273 "block_size": 512,
00:05:00.273 "num_blocks": 16384,
00:05:00.273 "uuid": "c55c0dc1-7d4f-406a-9f06-fa60d829ba75",
00:05:00.273 "assigned_rate_limits": {
00:05:00.273 "rw_ios_per_sec": 0,
00:05:00.273 "rw_mbytes_per_sec": 0,
00:05:00.273 "r_mbytes_per_sec": 0,
00:05:00.273 "w_mbytes_per_sec": 0
00:05:00.273 },
00:05:00.273 "claimed": false, 00:05:00.273 "zoned": false, 00:05:00.273 "supported_io_types": { 00:05:00.273 "read": true, 00:05:00.273 "write": true, 00:05:00.273 "unmap": true, 00:05:00.274 "flush": true, 00:05:00.274 "reset": true, 00:05:00.274 "nvme_admin": false, 00:05:00.274 "nvme_io": false, 00:05:00.274 "nvme_io_md": false, 00:05:00.274 "write_zeroes": true, 00:05:00.274 "zcopy": true, 00:05:00.274 "get_zone_info": false, 00:05:00.274 "zone_management": false, 00:05:00.274 "zone_append": false, 00:05:00.274 "compare": false, 00:05:00.274 "compare_and_write": false, 00:05:00.274 "abort": true, 00:05:00.274 "seek_hole": false, 00:05:00.274 "seek_data": false, 00:05:00.274 "copy": true, 00:05:00.274 "nvme_iov_md": false 00:05:00.274 }, 00:05:00.274 "memory_domains": [ 00:05:00.274 { 00:05:00.274 "dma_device_id": "system", 00:05:00.274 "dma_device_type": 1 00:05:00.274 }, 00:05:00.274 { 00:05:00.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.274 "dma_device_type": 2 00:05:00.274 } 00:05:00.274 ], 00:05:00.274 "driver_specific": {} 00:05:00.274 } 00:05:00.274 ]' 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 [2024-11-05 18:54:29.485302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.274 [2024-11-05 18:54:29.485333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.274 [2024-11-05 18:54:29.485346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xebdda0 00:05:00.274 [2024-11-05 18:54:29.485353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.274 [2024-11-05 18:54:29.486720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.274 [2024-11-05 18:54:29.486741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.274 Passthru0 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.274 { 00:05:00.274 "name": "Malloc0", 00:05:00.274 "aliases": [ 00:05:00.274 "c55c0dc1-7d4f-406a-9f06-fa60d829ba75" 00:05:00.274 ], 00:05:00.274 "product_name": "Malloc disk", 00:05:00.274 "block_size": 512, 00:05:00.274 "num_blocks": 16384, 00:05:00.274 "uuid": "c55c0dc1-7d4f-406a-9f06-fa60d829ba75", 00:05:00.274 "assigned_rate_limits": { 00:05:00.274 "rw_ios_per_sec": 0, 00:05:00.274 "rw_mbytes_per_sec": 0, 00:05:00.274 "r_mbytes_per_sec": 0, 00:05:00.274 "w_mbytes_per_sec": 0 00:05:00.274 }, 00:05:00.274 "claimed": true, 00:05:00.274 "claim_type": "exclusive_write", 00:05:00.274 "zoned": false, 00:05:00.274 "supported_io_types": { 00:05:00.274 "read": true, 00:05:00.274 "write": true, 00:05:00.274 "unmap": true, 00:05:00.274 "flush": 
true, 00:05:00.274 "reset": true, 00:05:00.274 "nvme_admin": false, 00:05:00.274 "nvme_io": false, 00:05:00.274 "nvme_io_md": false, 00:05:00.274 "write_zeroes": true, 00:05:00.274 "zcopy": true, 00:05:00.274 "get_zone_info": false, 00:05:00.274 "zone_management": false, 00:05:00.274 "zone_append": false, 00:05:00.274 "compare": false, 00:05:00.274 "compare_and_write": false, 00:05:00.274 "abort": true, 00:05:00.274 "seek_hole": false, 00:05:00.274 "seek_data": false, 00:05:00.274 "copy": true, 00:05:00.274 "nvme_iov_md": false 00:05:00.274 }, 00:05:00.274 "memory_domains": [ 00:05:00.274 { 00:05:00.274 "dma_device_id": "system", 00:05:00.274 "dma_device_type": 1 00:05:00.274 }, 00:05:00.274 { 00:05:00.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.274 "dma_device_type": 2 00:05:00.274 } 00:05:00.274 ], 00:05:00.274 "driver_specific": {} 00:05:00.274 }, 00:05:00.274 { 00:05:00.274 "name": "Passthru0", 00:05:00.274 "aliases": [ 00:05:00.274 "816dd67d-4ca9-519c-93f3-d4cd95e13b27" 00:05:00.274 ], 00:05:00.274 "product_name": "passthru", 00:05:00.274 "block_size": 512, 00:05:00.274 "num_blocks": 16384, 00:05:00.274 "uuid": "816dd67d-4ca9-519c-93f3-d4cd95e13b27", 00:05:00.274 "assigned_rate_limits": { 00:05:00.274 "rw_ios_per_sec": 0, 00:05:00.274 "rw_mbytes_per_sec": 0, 00:05:00.274 "r_mbytes_per_sec": 0, 00:05:00.274 "w_mbytes_per_sec": 0 00:05:00.274 }, 00:05:00.274 "claimed": false, 00:05:00.274 "zoned": false, 00:05:00.274 "supported_io_types": { 00:05:00.274 "read": true, 00:05:00.274 "write": true, 00:05:00.274 "unmap": true, 00:05:00.274 "flush": true, 00:05:00.274 "reset": true, 00:05:00.274 "nvme_admin": false, 00:05:00.274 "nvme_io": false, 00:05:00.274 "nvme_io_md": false, 00:05:00.274 "write_zeroes": true, 00:05:00.274 "zcopy": true, 00:05:00.274 "get_zone_info": false, 00:05:00.274 "zone_management": false, 00:05:00.274 "zone_append": false, 00:05:00.274 "compare": false, 00:05:00.274 "compare_and_write": false, 00:05:00.274 "abort": true, 00:05:00.274 "seek_hole": false, 00:05:00.274 "seek_data": false, 00:05:00.274 "copy": true, 00:05:00.274 "nvme_iov_md": false 00:05:00.274 }, 00:05:00.274 "memory_domains": [ 00:05:00.274 { 00:05:00.274 "dma_device_id": "system", 00:05:00.274 "dma_device_type": 1 00:05:00.274 }, 00:05:00.274 { 00:05:00.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.274 "dma_device_type": 2 00:05:00.274 } 00:05:00.274 ], 00:05:00.274 "driver_specific": { 00:05:00.274 "passthru": { 00:05:00.274 "name": "Passthru0", 00:05:00.274 "base_bdev_name": "Malloc0" 00:05:00.274 } 00:05:00.274 } 00:05:00.274 } 00:05:00.274 ]' 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.274 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.274 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.534 18:54:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.535 00:05:00.535 real 0m0.300s 00:05:00.535 user 0m0.186s 00:05:00.535 sys 0m0.046s 00:05:00.535 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.535 18:54:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.535 ************************************ 00:05:00.535 END TEST rpc_integrity 00:05:00.535 ************************************ 00:05:00.535 18:54:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.535 18:54:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.535 18:54:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.535 18:54:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.535 ************************************ 00:05:00.535 START TEST rpc_plugins 00:05:00.535 ************************************ 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.535 { 00:05:00.535 "name": "Malloc1", 00:05:00.535 "aliases": [ 00:05:00.535 "f8f936e3-079d-4c46-a1c4-23c49464c213" 00:05:00.535 ], 00:05:00.535 "product_name": "Malloc disk", 00:05:00.535 "block_size": 4096, 00:05:00.535 "num_blocks": 256, 00:05:00.535 "uuid": "f8f936e3-079d-4c46-a1c4-23c49464c213", 00:05:00.535 "assigned_rate_limits": { 00:05:00.535 "rw_ios_per_sec": 0, 00:05:00.535 "rw_mbytes_per_sec": 0, 00:05:00.535 "r_mbytes_per_sec": 0, 00:05:00.535 "w_mbytes_per_sec": 0 00:05:00.535 }, 00:05:00.535 "claimed": false, 00:05:00.535 "zoned": false, 00:05:00.535 "supported_io_types": { 00:05:00.535 "read": true, 00:05:00.535 "write": true, 00:05:00.535 "unmap": true, 00:05:00.535 "flush": true, 00:05:00.535 "reset": true, 00:05:00.535 "nvme_admin": false, 00:05:00.535 "nvme_io": false, 00:05:00.535 "nvme_io_md": false, 00:05:00.535 "write_zeroes": true, 00:05:00.535 "zcopy": true, 00:05:00.535 "get_zone_info": false, 00:05:00.535 "zone_management": false, 00:05:00.535 "zone_append": false, 00:05:00.535 "compare": false, 00:05:00.535 "compare_and_write": false, 00:05:00.535 "abort": true, 00:05:00.535 "seek_hole": false, 00:05:00.535 "seek_data": false, 00:05:00.535 "copy": true, 00:05:00.535 "nvme_iov_md": false 
00:05:00.535 },
00:05:00.535 "memory_domains": [
00:05:00.535 {
00:05:00.535 "dma_device_id": "system",
00:05:00.535 "dma_device_type": 1
00:05:00.535 },
00:05:00.535 {
00:05:00.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:00.535 "dma_device_type": 2
00:05:00.535 }
00:05:00.535 ],
00:05:00.535 "driver_specific": {}
00:05:00.535 }
00:05:00.535 ]'
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.535 18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:00.535 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:00.795 18:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:00.795
00:05:00.795 real	0m0.149s
00:05:00.795 user	0m0.096s
00:05:00.795 sys	0m0.019s
18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.795 ************************************
00:05:00.795 END TEST rpc_plugins
00:05:00.795 ************************************
00:05:00.795 18:54:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:00.795 18:54:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:00.795 18:54:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:00.795 18:54:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.795 ************************************
00:05:00.795 START TEST rpc_trace_cmd_test
00:05:00.795 ************************************
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:00.795 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110588",
00:05:00.795 "tpoint_group_mask": "0x8",
00:05:00.795 "iscsi_conn": {
00:05:00.795 "mask": "0x2",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "scsi": {
00:05:00.795 "mask": "0x4",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "bdev": {
00:05:00.795 "mask": "0x8",
00:05:00.795 "tpoint_mask": "0xffffffffffffffff"
00:05:00.795 },
00:05:00.795 "nvmf_rdma": {
00:05:00.795 "mask": "0x10",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "nvmf_tcp": {
00:05:00.795 "mask": "0x20",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "ftl": {
00:05:00.795 "mask": "0x40",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "blobfs": {
00:05:00.795 "mask": "0x80",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "dsa": {
00:05:00.795 "mask": "0x200",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "thread": {
00:05:00.795 "mask": "0x400",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "nvme_pcie": {
00:05:00.795 "mask": "0x800",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "iaa": {
00:05:00.795 "mask": "0x1000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "nvme_tcp": {
00:05:00.795 "mask": "0x2000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "bdev_nvme": {
00:05:00.795 "mask": "0x4000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "sock": {
00:05:00.795 "mask": "0x8000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "blob": {
00:05:00.795 "mask": "0x10000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "bdev_raid": {
00:05:00.795 "mask": "0x20000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 },
00:05:00.795 "scheduler": {
00:05:00.795 "mask": "0x40000",
00:05:00.795 "tpoint_mask": "0x0"
00:05:00.795 }
00:05:00.795 }'
00:05:00.795 18:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:01.056 18:54:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:01.056
00:05:01.056 real	0m0.251s
00:05:01.056 user	0m0.213s
00:05:01.056 sys	0m0.030s
18:54:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:01.056 ************************************
00:05:01.056 END TEST rpc_trace_cmd_test
00:05:01.056 ************************************
00:05:01.056 18:54:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:01.056 18:54:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:01.056 18:54:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:01.056 18:54:30 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:01.056 18:54:30 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:01.056 18:54:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.056 ************************************
00:05:01.056 START TEST rpc_daemon_integrity
00:05:01.056 ************************************
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.056 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.057 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:01.057 {
00:05:01.057 "name": "Malloc2",
00:05:01.057 "aliases": [
00:05:01.057 "a44f5500-1e73-4bac-9938-8fccda4529e4"
00:05:01.057 ],
00:05:01.057 "product_name": "Malloc disk",
00:05:01.057 "block_size": 512,
00:05:01.057 "num_blocks": 16384,
00:05:01.057 "uuid": "a44f5500-1e73-4bac-9938-8fccda4529e4",
00:05:01.057 "assigned_rate_limits": {
00:05:01.057 "rw_ios_per_sec": 0,
00:05:01.057 "rw_mbytes_per_sec": 0,
00:05:01.057 "r_mbytes_per_sec": 0,
00:05:01.057 "w_mbytes_per_sec": 0
00:05:01.057 },
00:05:01.057 "claimed": false,
00:05:01.057 "zoned": false,
00:05:01.057 "supported_io_types": {
00:05:01.057 "read": true,
00:05:01.057 "write": true,
00:05:01.057 "unmap": true,
00:05:01.057 "flush": true,
00:05:01.057 "reset": true,
00:05:01.057 "nvme_admin": false,
00:05:01.057 "nvme_io": false,
00:05:01.057 "nvme_io_md": false,
00:05:01.057 "write_zeroes": true,
00:05:01.057 "zcopy": true,
00:05:01.057 "get_zone_info": false,
00:05:01.057 "zone_management": false,
00:05:01.057 "zone_append": false,
00:05:01.057 "compare": false,
00:05:01.057 "compare_and_write": false,
00:05:01.057 "abort": true,
00:05:01.057 "seek_hole": false,
00:05:01.057 "seek_data": false,
00:05:01.057 "copy": true,
00:05:01.057 "nvme_iov_md": false
00:05:01.057 },
00:05:01.057 "memory_domains": [
00:05:01.057 {
00:05:01.057 "dma_device_id": "system",
00:05:01.057 "dma_device_type": 1
00:05:01.057 },
00:05:01.057 {
00:05:01.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.057 "dma_device_type": 2
00:05:01.057 }
00:05:01.057 ],
00:05:01.057 "driver_specific": {}
00:05:01.057 }
00:05:01.057 ]'
00:05:01.057 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.316 [2024-11-05 18:54:30.415834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:01.316 [2024-11-05 18:54:30.415863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:01.316 [2024-11-05 18:54:30.415877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfef090
00:05:01.316 [2024-11-05 18:54:30.415885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:01.316 [2024-11-05 18:54:30.417206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:01.316 [2024-11-05 18:54:30.417226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:01.316 Passthru0
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.316 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:01.316 {
00:05:01.316 "name": "Malloc2",
00:05:01.316 "aliases": [
00:05:01.316 "a44f5500-1e73-4bac-9938-8fccda4529e4"
00:05:01.316 ],
00:05:01.316 "product_name": "Malloc disk",
00:05:01.316 "block_size": 512,
00:05:01.316 "num_blocks": 16384,
00:05:01.316 "uuid": "a44f5500-1e73-4bac-9938-8fccda4529e4",
00:05:01.316 "assigned_rate_limits": {
00:05:01.316 "rw_ios_per_sec": 0,
00:05:01.316 "rw_mbytes_per_sec": 0,
00:05:01.316 "r_mbytes_per_sec": 0,
00:05:01.316 "w_mbytes_per_sec": 0
00:05:01.316 },
00:05:01.316 "claimed": true,
00:05:01.316 "claim_type": "exclusive_write",
00:05:01.316 "zoned": false,
00:05:01.316 "supported_io_types": {
00:05:01.316 "read": true,
00:05:01.316 "write": true,
00:05:01.316 "unmap": true,
00:05:01.316 "flush": true,
00:05:01.316 "reset": true,
00:05:01.316 "nvme_admin": false,
00:05:01.316 "nvme_io": false,
00:05:01.316 "nvme_io_md": false,
00:05:01.316 "write_zeroes": true,
00:05:01.316 "zcopy": true,
00:05:01.316 "get_zone_info": false,
00:05:01.316 "zone_management": false,
00:05:01.316 "zone_append": false,
00:05:01.316 "compare": false,
00:05:01.316 "compare_and_write": false,
00:05:01.316 "abort": true,
00:05:01.316 "seek_hole": false,
00:05:01.316 "seek_data": false,
00:05:01.316 "copy": true,
00:05:01.316 "nvme_iov_md": false
00:05:01.316 },
00:05:01.316 "memory_domains": [
00:05:01.316 {
00:05:01.316 "dma_device_id": "system",
00:05:01.316 "dma_device_type": 1
00:05:01.316 },
00:05:01.316 {
00:05:01.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.316 "dma_device_type": 2
00:05:01.316 }
00:05:01.316 ],
00:05:01.316 "driver_specific": {}
00:05:01.316 },
00:05:01.316 {
00:05:01.316 "name": "Passthru0",
00:05:01.316 "aliases": [
00:05:01.316 "3430668e-1f29-5574-a84d-1e152f3dd3a5"
00:05:01.316 ],
00:05:01.317 "product_name": "passthru",
00:05:01.317 "block_size": 512,
00:05:01.317 "num_blocks": 16384,
00:05:01.317 "uuid": "3430668e-1f29-5574-a84d-1e152f3dd3a5",
00:05:01.317 "assigned_rate_limits": {
00:05:01.317 "rw_ios_per_sec": 0,
00:05:01.317 "rw_mbytes_per_sec": 0,
00:05:01.317 "r_mbytes_per_sec": 0,
00:05:01.317 "w_mbytes_per_sec": 0
00:05:01.317 },
00:05:01.317 "claimed": false,
00:05:01.317 "zoned": false,
00:05:01.317 "supported_io_types": {
00:05:01.317 "read": true,
00:05:01.317 "write": true,
00:05:01.317 "unmap": true,
00:05:01.317 "flush": true,
00:05:01.317 "reset": true,
00:05:01.317 "nvme_admin": false,
00:05:01.317 "nvme_io": false,
00:05:01.317 "nvme_io_md": false,
00:05:01.317 "write_zeroes": true,
00:05:01.317 "zcopy": true,
00:05:01.317 "get_zone_info": false,
00:05:01.317 "zone_management": false,
00:05:01.317 "zone_append": false,
00:05:01.317 "compare": false,
00:05:01.317 "compare_and_write": false,
00:05:01.317 "abort": true,
00:05:01.317 "seek_hole": false,
00:05:01.317 "seek_data": false,
00:05:01.317 "copy": true,
00:05:01.317 "nvme_iov_md": false
00:05:01.317 },
00:05:01.317 "memory_domains": [
00:05:01.317 {
00:05:01.317 "dma_device_id": "system",
00:05:01.317 "dma_device_type": 1
00:05:01.317 },
00:05:01.317 {
00:05:01.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.317 "dma_device_type": 2
00:05:01.317 }
00:05:01.317 ],
00:05:01.317 "driver_specific": {
00:05:01.317 "passthru": {
00:05:01.317 "name": "Passthru0",
00:05:01.317 "base_bdev_name": "Malloc2"
00:05:01.317 }
00:05:01.317 }
00:05:01.317 }
00:05:01.317 ]'
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:01.317 18:54:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:01.317
00:05:01.317 real	0m0.299s
00:05:01.317 user	0m0.192s
00:05:01.317 sys	0m0.037s
18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable
18:54:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.317 ************************************
00:05:01.317 END TEST rpc_daemon_integrity
00:05:01.317 ************************************
00:05:01.317 18:54:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:01.317 18:54:30 rpc -- rpc/rpc.sh@84 -- # killprocess 110588
00:05:01.317 18:54:30 rpc -- common/autotest_common.sh@952 -- # '[' -z 110588 ']'
00:05:01.317 18:54:30 rpc -- common/autotest_common.sh@956 -- # kill -0 110588
00:05:01.317 18:54:30 rpc -- common/autotest_common.sh@957 -- # uname
00:05:01.317 18:54:30 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:01.317 18:54:30 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 110588
00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 110588' 00:05:01.576 killing process with pid 110588 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@971 -- # kill 110588 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@976 -- # wait 110588 00:05:01.576 00:05:01.576 real 0m2.625s 00:05:01.576 user 0m3.443s 00:05:01.576 sys 0m0.729s 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.576 18:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.576 ************************************ 00:05:01.576 END TEST rpc 00:05:01.576 ************************************ 00:05:01.836 18:54:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.836 18:54:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.836 18:54:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.836 18:54:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.836 ************************************ 00:05:01.836 START TEST skip_rpc 00:05:01.836 ************************************ 00:05:01.836 18:54:30 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.836 * Looking for test storage... 00:05:01.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.836 18:54:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.836 --rc genhtml_branch_coverage=1 00:05:01.836 --rc genhtml_function_coverage=1 00:05:01.836 --rc genhtml_legend=1 00:05:01.836 --rc geninfo_all_blocks=1 00:05:01.836 --rc geninfo_unexecuted_blocks=1 00:05:01.836 00:05:01.836 ' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.836 --rc genhtml_branch_coverage=1 00:05:01.836 --rc genhtml_function_coverage=1 00:05:01.836 --rc genhtml_legend=1 00:05:01.836 --rc geninfo_all_blocks=1 00:05:01.836 --rc geninfo_unexecuted_blocks=1 00:05:01.836 00:05:01.836 ' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.836 --rc genhtml_branch_coverage=1 00:05:01.836 --rc genhtml_function_coverage=1 00:05:01.836 --rc genhtml_legend=1 00:05:01.836 --rc geninfo_all_blocks=1 00:05:01.836 --rc geninfo_unexecuted_blocks=1 00:05:01.836 00:05:01.836 ' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.836 --rc genhtml_branch_coverage=1 00:05:01.836 --rc genhtml_function_coverage=1 00:05:01.836 --rc genhtml_legend=1 00:05:01.836 --rc geninfo_all_blocks=1 00:05:01.836 --rc geninfo_unexecuted_blocks=1 00:05:01.836 00:05:01.836 ' 00:05:01.836 18:54:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.836 18:54:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:01.836 18:54:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.836 18:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.095 ************************************ 00:05:02.095 START TEST skip_rpc 00:05:02.095 ************************************ 00:05:02.095 18:54:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:02.095 
18:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111144 00:05:02.095 18:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.095 18:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.095 18:54:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.095 [2024-11-05 18:54:31.256048] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:02.095 [2024-11-05 18:54:31.256119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111144 ] 00:05:02.096 [2024-11-05 18:54:31.332412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.096 [2024-11-05 18:54:31.374726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111144 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 111144 ']' 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 111144 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111144 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111144' 00:05:07.378 killing process with pid 111144 00:05:07.378 18:54:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 111144 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 111144 00:05:07.378 00:05:07.378 real 0m5.284s 00:05:07.378 user 0m5.084s 00:05:07.378 sys 0m0.252s 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.378 18:54:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 ************************************ 00:05:07.378 END TEST skip_rpc 00:05:07.378 ************************************ 00:05:07.378 18:54:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.378 18:54:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.378 18:54:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.378 18:54:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 ************************************ 00:05:07.378 START TEST skip_rpc_with_json 00:05:07.378 ************************************ 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=112368 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 112368 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 112368 ']' 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.378 18:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 [2024-11-05 18:54:36.613231] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
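The skip_rpc run that just hit its END banner above starts spdk_tgt with --no-rpc-server and asserts that an RPC call fails while the process itself stays healthy. A minimal standalone reproduction of that check, assuming the workspace layout from this run and scripts/rpc.py as the client (the same tool the harness's rpc_cmd wraps):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                   # the harness uses the same fixed delay
  if scripts/rpc.py spdk_get_version; then  # must fail: no RPC server was started
      echo "unexpected: RPC answered despite --no-rpc-server" >&2
      exit 1
  fi
  kill -9 "$spdk_pid"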
00:05:07.378 [2024-11-05 18:54:36.613282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112368 ] 00:05:07.378 [2024-11-05 18:54:36.684345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.638 [2024-11-05 18:54:36.721617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.208 [2024-11-05 18:54:37.402320] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.208 request: 00:05:08.208 { 00:05:08.208 "trtype": "tcp", 00:05:08.208 "method": "nvmf_get_transports", 00:05:08.208 "req_id": 1 00:05:08.208 } 00:05:08.208 Got JSON-RPC error response 00:05:08.208 response: 00:05:08.208 { 00:05:08.208 "code": -19, 00:05:08.208 "message": "No such device" 00:05:08.208 } 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.208 [2024-11-05 18:54:37.414443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.208 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.468 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.468 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.468 { 00:05:08.468 "subsystems": [ 00:05:08.468 { 00:05:08.468 "subsystem": "fsdev", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "fsdev_set_opts", 00:05:08.468 "params": { 00:05:08.468 "fsdev_io_pool_size": 65535, 00:05:08.468 "fsdev_io_cache_size": 256 00:05:08.468 } 00:05:08.468 } 00:05:08.468 ] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "vfio_user_target", 00:05:08.468 "config": null 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "keyring", 00:05:08.468 "config": [] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "iobuf", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "iobuf_set_options", 00:05:08.468 "params": { 00:05:08.468 "small_pool_count": 8192, 00:05:08.468 "large_pool_count": 1024, 00:05:08.468 "small_bufsize": 8192, 00:05:08.468 "large_bufsize": 135168, 00:05:08.468 "enable_numa": false 00:05:08.468 } 00:05:08.468 } 00:05:08.468 
] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "sock", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "sock_set_default_impl", 00:05:08.468 "params": { 00:05:08.468 "impl_name": "posix" 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "sock_impl_set_options", 00:05:08.468 "params": { 00:05:08.468 "impl_name": "ssl", 00:05:08.468 "recv_buf_size": 4096, 00:05:08.468 "send_buf_size": 4096, 00:05:08.468 "enable_recv_pipe": true, 00:05:08.468 "enable_quickack": false, 00:05:08.468 "enable_placement_id": 0, 00:05:08.468 "enable_zerocopy_send_server": true, 00:05:08.468 "enable_zerocopy_send_client": false, 00:05:08.468 "zerocopy_threshold": 0, 00:05:08.468 "tls_version": 0, 00:05:08.468 "enable_ktls": false 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "sock_impl_set_options", 00:05:08.468 "params": { 00:05:08.468 "impl_name": "posix", 00:05:08.468 "recv_buf_size": 2097152, 00:05:08.468 "send_buf_size": 2097152, 00:05:08.468 "enable_recv_pipe": true, 00:05:08.468 "enable_quickack": false, 00:05:08.468 "enable_placement_id": 0, 00:05:08.468 "enable_zerocopy_send_server": true, 00:05:08.468 "enable_zerocopy_send_client": false, 00:05:08.468 "zerocopy_threshold": 0, 00:05:08.468 "tls_version": 0, 00:05:08.468 "enable_ktls": false 00:05:08.468 } 00:05:08.468 } 00:05:08.468 ] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "vmd", 00:05:08.468 "config": [] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "accel", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "accel_set_options", 00:05:08.468 "params": { 00:05:08.468 "small_cache_size": 128, 00:05:08.468 "large_cache_size": 16, 00:05:08.468 "task_count": 2048, 00:05:08.468 "sequence_count": 2048, 00:05:08.468 "buf_count": 2048 00:05:08.468 } 00:05:08.468 } 00:05:08.468 ] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "bdev", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "bdev_set_options", 00:05:08.468 "params": { 00:05:08.468 "bdev_io_pool_size": 65535, 00:05:08.468 "bdev_io_cache_size": 256, 00:05:08.468 "bdev_auto_examine": true, 00:05:08.468 "iobuf_small_cache_size": 128, 00:05:08.468 "iobuf_large_cache_size": 16 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "bdev_raid_set_options", 00:05:08.468 "params": { 00:05:08.468 "process_window_size_kb": 1024, 00:05:08.468 "process_max_bandwidth_mb_sec": 0 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "bdev_iscsi_set_options", 00:05:08.468 "params": { 00:05:08.468 "timeout_sec": 30 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "bdev_nvme_set_options", 00:05:08.468 "params": { 00:05:08.468 "action_on_timeout": "none", 00:05:08.468 "timeout_us": 0, 00:05:08.468 "timeout_admin_us": 0, 00:05:08.468 "keep_alive_timeout_ms": 10000, 00:05:08.468 "arbitration_burst": 0, 00:05:08.468 "low_priority_weight": 0, 00:05:08.468 "medium_priority_weight": 0, 00:05:08.468 "high_priority_weight": 0, 00:05:08.468 "nvme_adminq_poll_period_us": 10000, 00:05:08.468 "nvme_ioq_poll_period_us": 0, 00:05:08.468 "io_queue_requests": 0, 00:05:08.468 "delay_cmd_submit": true, 00:05:08.468 "transport_retry_count": 4, 00:05:08.468 "bdev_retry_count": 3, 00:05:08.468 "transport_ack_timeout": 0, 00:05:08.468 "ctrlr_loss_timeout_sec": 0, 00:05:08.468 "reconnect_delay_sec": 0, 00:05:08.468 "fast_io_fail_timeout_sec": 0, 00:05:08.468 "disable_auto_failback": false, 00:05:08.468 "generate_uuids": false, 00:05:08.468 "transport_tos": 0, 
00:05:08.468 "nvme_error_stat": false, 00:05:08.468 "rdma_srq_size": 0, 00:05:08.468 "io_path_stat": false, 00:05:08.468 "allow_accel_sequence": false, 00:05:08.468 "rdma_max_cq_size": 0, 00:05:08.468 "rdma_cm_event_timeout_ms": 0, 00:05:08.468 "dhchap_digests": [ 00:05:08.468 "sha256", 00:05:08.468 "sha384", 00:05:08.468 "sha512" 00:05:08.468 ], 00:05:08.468 "dhchap_dhgroups": [ 00:05:08.468 "null", 00:05:08.468 "ffdhe2048", 00:05:08.468 "ffdhe3072", 00:05:08.468 "ffdhe4096", 00:05:08.468 "ffdhe6144", 00:05:08.468 "ffdhe8192" 00:05:08.468 ] 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "bdev_nvme_set_hotplug", 00:05:08.468 "params": { 00:05:08.468 "period_us": 100000, 00:05:08.468 "enable": false 00:05:08.468 } 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "method": "bdev_wait_for_examine" 00:05:08.468 } 00:05:08.468 ] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "scsi", 00:05:08.468 "config": null 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "scheduler", 00:05:08.468 "config": [ 00:05:08.468 { 00:05:08.468 "method": "framework_set_scheduler", 00:05:08.468 "params": { 00:05:08.468 "name": "static" 00:05:08.468 } 00:05:08.468 } 00:05:08.468 ] 00:05:08.468 }, 00:05:08.468 { 00:05:08.468 "subsystem": "vhost_scsi", 00:05:08.468 "config": [] 00:05:08.468 }, 00:05:08.469 { 00:05:08.469 "subsystem": "vhost_blk", 00:05:08.469 "config": [] 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "subsystem": "ublk", 00:05:08.469 "config": [] 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "subsystem": "nbd", 00:05:08.469 "config": [] 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "subsystem": "nvmf", 00:05:08.469 "config": [ 00:05:08.469 { 00:05:08.469 "method": "nvmf_set_config", 00:05:08.469 "params": { 00:05:08.469 "discovery_filter": "match_any", 00:05:08.469 "admin_cmd_passthru": { 00:05:08.469 "identify_ctrlr": false 00:05:08.469 }, 00:05:08.469 "dhchap_digests": [ 00:05:08.469 "sha256", 00:05:08.469 "sha384", 00:05:08.469 "sha512" 00:05:08.469 ], 00:05:08.469 "dhchap_dhgroups": [ 00:05:08.469 "null", 00:05:08.469 "ffdhe2048", 00:05:08.469 "ffdhe3072", 00:05:08.469 "ffdhe4096", 00:05:08.469 "ffdhe6144", 00:05:08.469 "ffdhe8192" 00:05:08.469 ] 00:05:08.469 } 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "method": "nvmf_set_max_subsystems", 00:05:08.469 "params": { 00:05:08.469 "max_subsystems": 1024 00:05:08.469 } 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "method": "nvmf_set_crdt", 00:05:08.469 "params": { 00:05:08.469 "crdt1": 0, 00:05:08.469 "crdt2": 0, 00:05:08.469 "crdt3": 0 00:05:08.469 } 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "method": "nvmf_create_transport", 00:05:08.469 "params": { 00:05:08.469 "trtype": "TCP", 00:05:08.469 "max_queue_depth": 128, 00:05:08.469 "max_io_qpairs_per_ctrlr": 127, 00:05:08.469 "in_capsule_data_size": 4096, 00:05:08.469 "max_io_size": 131072, 00:05:08.469 "io_unit_size": 131072, 00:05:08.469 "max_aq_depth": 128, 00:05:08.469 "num_shared_buffers": 511, 00:05:08.469 "buf_cache_size": 4294967295, 00:05:08.469 "dif_insert_or_strip": false, 00:05:08.469 "zcopy": false, 00:05:08.469 "c2h_success": true, 00:05:08.469 "sock_priority": 0, 00:05:08.469 "abort_timeout_sec": 1, 00:05:08.469 "ack_timeout": 0, 00:05:08.469 "data_wr_pool_size": 0 00:05:08.469 } 00:05:08.469 } 00:05:08.469 ] 00:05:08.469 }, 00:05:08.469 { 00:05:08.469 "subsystem": "iscsi", 00:05:08.469 "config": [ 00:05:08.469 { 00:05:08.469 "method": "iscsi_set_options", 00:05:08.469 "params": { 00:05:08.469 "node_base": "iqn.2016-06.io.spdk", 00:05:08.469 "max_sessions": 
128, 00:05:08.469 "max_connections_per_session": 2, 00:05:08.469 "max_queue_depth": 64, 00:05:08.469 "default_time2wait": 2, 00:05:08.469 "default_time2retain": 20, 00:05:08.469 "first_burst_length": 8192, 00:05:08.469 "immediate_data": true, 00:05:08.469 "allow_duplicated_isid": false, 00:05:08.469 "error_recovery_level": 0, 00:05:08.469 "nop_timeout": 60, 00:05:08.469 "nop_in_interval": 30, 00:05:08.469 "disable_chap": false, 00:05:08.469 "require_chap": false, 00:05:08.469 "mutual_chap": false, 00:05:08.469 "chap_group": 0, 00:05:08.469 "max_large_datain_per_connection": 64, 00:05:08.469 "max_r2t_per_connection": 4, 00:05:08.469 "pdu_pool_size": 36864, 00:05:08.469 "immediate_data_pool_size": 16384, 00:05:08.469 "data_out_pool_size": 2048 00:05:08.469 } 00:05:08.469 } 00:05:08.469 ] 00:05:08.469 } 00:05:08.469 ] 00:05:08.469 } 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 112368 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 112368 ']' 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 112368 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112368 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 112368' 00:05:08.469 killing process with pid 112368 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 112368 00:05:08.469 18:54:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 112368 00:05:08.729 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=112525 00:05:08.729 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.729 18:54:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 112525 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 112525 ']' 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 112525 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 112525 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 112525' 00:05:14.169 killing process with pid 112525 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 112525 00:05:14.169 18:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 112525 00:05:14.169 18:54:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.169 18:54:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.169 00:05:14.169 real 0m6.589s 00:05:14.169 user 0m6.504s 00:05:14.169 sys 0m0.537s 00:05:14.169 18:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.169 18:54:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.169 ************************************ 00:05:14.169 END TEST skip_rpc_with_json 00:05:14.169 ************************************ 00:05:14.169 18:54:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.169 18:54:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.169 18:54:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.169 18:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.169 ************************************ 00:05:14.169 START TEST skip_rpc_with_delay 00:05:14.169 ************************************ 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.170 [2024-11-05 
18:54:43.290928] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.170 00:05:14.170 real 0m0.080s 00:05:14.170 user 0m0.046s 00:05:14.170 sys 0m0.033s 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.170 18:54:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.170 ************************************ 00:05:14.170 END TEST skip_rpc_with_delay 00:05:14.170 ************************************ 00:05:14.170 18:54:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.170 18:54:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.170 18:54:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.170 18:54:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.170 18:54:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.170 18:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.170 ************************************ 00:05:14.170 START TEST exit_on_failed_rpc_init 00:05:14.170 ************************************ 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=113839 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 113839 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 113839 ']' 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.170 18:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.170 [2024-11-05 18:54:43.456394] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
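test_skip_rpc_with_delay only has to observe the startup error traced above (app.c: Cannot use '--wait-for-rpc' if no RPC server is going to be started); the whole assertion reduces to spdk_tgt exiting non-zero. A sketch under that assumption:

  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: contradictory flags were accepted" >&2
      exit 1
  fi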
00:05:14.170 [2024-11-05 18:54:43.456464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113839 ] 00:05:14.431 [2024-11-05 18:54:43.533979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.431 [2024-11-05 18:54:43.576147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.002 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.002 [2024-11-05 18:54:44.297905] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:15.002 [2024-11-05 18:54:44.297956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113908 ] 00:05:15.263 [2024-11-05 18:54:44.384544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.263 [2024-11-05 18:54:44.420363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.263 [2024-11-05 18:54:44.420413] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
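The _spdk_rpc_listen error just logged, together with the rpc_initialize and spdk_app_stop lines that follow, is the failure exit_on_failed_rpc_init provokes on purpose: a second spdk_tgt (core mask 0x2) is pointed at the default socket still held by pid 113839. A reduced reproduction, assuming both instances default to /var/tmp/spdk.sock:

  build/bin/spdk_tgt -m 0x1 & pid1=$!
  sleep 5                                   # let the first instance claim the socket
  if build/bin/spdk_tgt -m 0x2; then        # must fail: socket already in use
      echo "unexpected: second target started on a busy socket" >&2
      exit 1
  fi
  kill "$pid1"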
00:05:15.263 [2024-11-05 18:54:44.420423] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:15.263 [2024-11-05 18:54:44.420430] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 113839 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 113839 ']' 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 113839 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 113839 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 113839' 00:05:15.263 killing process with pid 113839 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 113839 00:05:15.263 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 113839 00:05:15.524 00:05:15.524 real 0m1.336s 00:05:15.524 user 0m1.569s 00:05:15.524 sys 0m0.372s 00:05:15.524 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.524 18:54:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.524 ************************************ 00:05:15.524 END TEST exit_on_failed_rpc_init 00:05:15.524 ************************************ 00:05:15.524 18:54:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.524 00:05:15.524 real 0m13.818s 00:05:15.524 user 0m13.432s 00:05:15.524 sys 0m1.523s 00:05:15.524 18:54:44 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.524 18:54:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.524 ************************************ 00:05:15.524 END TEST skip_rpc 00:05:15.524 ************************************ 00:05:15.524 18:54:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.524 18:54:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.524 18:54:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.524 18:54:44 -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.786 ************************************ 00:05:15.786 START TEST rpc_client 00:05:15.786 ************************************ 00:05:15.786 18:54:44 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.786 * Looking for test storage... 00:05:15.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.786 18:54:44 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.786 18:54:44 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.786 18:54:44 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.786 18:54:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.786 --rc genhtml_branch_coverage=1 00:05:15.786 --rc genhtml_function_coverage=1 00:05:15.786 --rc genhtml_legend=1 00:05:15.786 --rc geninfo_all_blocks=1 00:05:15.786 --rc geninfo_unexecuted_blocks=1 00:05:15.786 00:05:15.786 ' 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.786 --rc genhtml_branch_coverage=1 00:05:15.786 --rc genhtml_function_coverage=1 00:05:15.786 --rc genhtml_legend=1 00:05:15.786 --rc geninfo_all_blocks=1 00:05:15.786 --rc geninfo_unexecuted_blocks=1 00:05:15.786 00:05:15.786 ' 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.786 --rc genhtml_branch_coverage=1 00:05:15.786 --rc genhtml_function_coverage=1 00:05:15.786 --rc genhtml_legend=1 00:05:15.786 --rc geninfo_all_blocks=1 00:05:15.786 --rc geninfo_unexecuted_blocks=1 00:05:15.786 00:05:15.786 ' 00:05:15.786 18:54:45 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.786 --rc genhtml_branch_coverage=1 00:05:15.786 --rc genhtml_function_coverage=1 00:05:15.786 --rc genhtml_legend=1 00:05:15.786 --rc geninfo_all_blocks=1 00:05:15.786 --rc geninfo_unexecuted_blocks=1 00:05:15.786 00:05:15.786 ' 00:05:15.786 18:54:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.786 OK 00:05:15.786 18:54:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.786 00:05:15.786 real 0m0.228s 00:05:15.786 user 0m0.128s 00:05:15.787 sys 0m0.110s 00:05:15.787 18:54:45 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.787 18:54:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.787 ************************************ 00:05:15.787 END TEST rpc_client 00:05:15.787 ************************************ 00:05:16.048 18:54:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
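rpc_client_test, which printed OK above, drives the C JSON-RPC client library over the same wire protocol the shell helpers use. For orientation only, that protocol can be poked by hand against any running target; nc -U and the literal request body here are illustrative, not part of the harness:

  printf '%s' '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' \
      | nc -U /var/tmp/spdk.sock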
00:05:16.048 18:54:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.048 18:54:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.048 18:54:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.048 ************************************ 00:05:16.048 START TEST json_config 00:05:16.048 ************************************ 00:05:16.048 18:54:45 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:16.048 18:54:45 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.048 18:54:45 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.048 18:54:45 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.048 18:54:45 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.048 18:54:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.048 18:54:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.048 18:54:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.048 18:54:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.048 18:54:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.048 18:54:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.048 18:54:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.048 18:54:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.048 18:54:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.048 18:54:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.048 18:54:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.048 18:54:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:16.049 18:54:45 json_config -- scripts/common.sh@345 -- # : 1 00:05:16.049 18:54:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.049 18:54:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.049 18:54:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:16.049 18:54:45 json_config -- scripts/common.sh@353 -- # local d=1 00:05:16.049 18:54:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.049 18:54:45 json_config -- scripts/common.sh@355 -- # echo 1 00:05:16.049 18:54:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.049 18:54:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:16.049 18:54:45 json_config -- scripts/common.sh@353 -- # local d=2 00:05:16.049 18:54:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.049 18:54:45 json_config -- scripts/common.sh@355 -- # echo 2 00:05:16.049 18:54:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.049 18:54:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.049 18:54:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.049 18:54:45 json_config -- scripts/common.sh@368 -- # return 0 00:05:16.049 18:54:45 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.049 18:54:45 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.049 --rc genhtml_branch_coverage=1 00:05:16.049 --rc genhtml_function_coverage=1 00:05:16.049 --rc genhtml_legend=1 00:05:16.049 --rc geninfo_all_blocks=1 00:05:16.049 --rc geninfo_unexecuted_blocks=1 00:05:16.049 00:05:16.049 ' 00:05:16.049 18:54:45 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.049 --rc genhtml_branch_coverage=1 00:05:16.049 --rc genhtml_function_coverage=1 00:05:16.049 --rc genhtml_legend=1 00:05:16.049 --rc geninfo_all_blocks=1 00:05:16.049 --rc geninfo_unexecuted_blocks=1 00:05:16.049 00:05:16.049 ' 00:05:16.049 18:54:45 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.049 --rc genhtml_branch_coverage=1 00:05:16.049 --rc genhtml_function_coverage=1 00:05:16.049 --rc genhtml_legend=1 00:05:16.049 --rc geninfo_all_blocks=1 00:05:16.049 --rc geninfo_unexecuted_blocks=1 00:05:16.049 00:05:16.049 ' 00:05:16.049 18:54:45 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.049 --rc genhtml_branch_coverage=1 00:05:16.049 --rc genhtml_function_coverage=1 00:05:16.049 --rc genhtml_legend=1 00:05:16.049 --rc geninfo_all_blocks=1 00:05:16.049 --rc geninfo_unexecuted_blocks=1 00:05:16.049 00:05:16.049 ' 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
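The cmp_versions trace repeated above (here concluding that the installed lcov predates 2.x, so the branch-coverage rc overrides stay on) is a field-wise version compare in pure bash. A condensed sketch of the same algorithm; the function name mirrors scripts/common.sh, but the body is simplified rather than the verbatim helper:

  lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for (( i=0; i<${#a[@]} || i<${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                              # equal: not strictly less-than
  }
  lt 1.15 2 && echo "old lcov detected"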
00:05:16.049 18:54:45 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.049 18:54:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:16.049 18:54:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.049 18:54:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.049 18:54:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.049 18:54:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.049 18:54:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.049 18:54:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.049 18:54:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:16.049 18:54:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:16.049 18:54:45 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:16.049 18:54:45 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:16.049 18:54:45 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:16.049 18:54:45 
json_config -- nvmf/common.sh@50 -- # : 0 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:16.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:16.049 18:54:45 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:16.049 INFO: JSON configuration test init 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:16.049 18:54:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.310 18:54:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.310 18:54:45 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.310 18:54:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.310 18:54:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.310 18:54:45 json_config -- json_config/common.sh@10 -- # shift 00:05:16.310 18:54:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.310 18:54:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.310 18:54:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.310 18:54:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.310 18:54:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.310 18:54:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=114364 00:05:16.310 18:54:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.310 Waiting for target to run... 00:05:16.310 18:54:45 json_config -- json_config/common.sh@25 -- # waitforlisten 114364 /var/tmp/spdk_tgt.sock 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@833 -- # '[' -z 114364 ']' 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.310 18:54:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.310 18:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.310 [2024-11-05 18:54:45.448811] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
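waitforlisten, traced above ahead of the target banner, is a polling loop: it retries an RPC probe against the application socket until the target answers or the retry budget runs out. A minimal equivalent (retry count and interval assumed; rpc_get_methods used as the liveness probe):

  for i in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done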
00:05:16.310 [2024-11-05 18:54:45.448883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114364 ] 00:05:16.570 [2024-11-05 18:54:45.729156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.570 [2024-11-05 18:54:45.759530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:17.142 18:54:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.142 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.142 18:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:17.142 18:54:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:17.142 18:54:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.713 18:54:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.713 18:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:17.713 18:54:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.713 18:54:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.713 18:54:47 json_config -- 
json_config/json_config.sh@54 -- # uniq -u 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.713 18:54:47 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.975 18:54:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.975 18:54:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.975 18:54:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.975 18:54:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.975 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.975 MallocForNvmf0 00:05:17.975 18:54:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.975 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.235 MallocForNvmf1 00:05:18.235 18:54:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.235 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.496 [2024-11-05 18:54:47.607524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.496 18:54:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.496 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.496 18:54:47 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.496 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.756 18:54:47 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.756 18:54:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.017 18:54:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.017 18:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.017 [2024-11-05 18:54:48.333799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.277 18:54:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:19.277 18:54:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.277 18:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.277 18:54:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:19.277 18:54:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.277 18:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.277 18:54:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:19.277 18:54:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.277 18:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.277 MallocBdevForConfigChangeCheck 00:05:19.537 18:54:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:19.537 18:54:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:19.537 18:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.537 18:54:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:19.537 18:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.798 18:54:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:19.798 INFO: shutting down applications... 
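The subsystem bring-up traced above reduces to a short sequence of rpc.py calls against the target's UNIX-domain RPC socket. A minimal standalone sketch, using the checkout path and socket from this run (on another machine only the SPDK path would differ):

    #!/usr/bin/env bash
    # Sketch of the create_nvmf_subsystem_config step seen in the trace.
    # Assumes spdk_tgt is already running and serving RPCs on /var/tmp/spdk_tgt.sock.
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Two malloc bdevs to act as namespaces (size/block-size arguments as above).
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, one subsystem, both namespaces, and a listener on port 4420.
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
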
00:05:19.798 18:54:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:19.798 18:54:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:19.798 18:54:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:19.798 18:54:48 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:20.058 Calling clear_iscsi_subsystem 00:05:20.058 Calling clear_nvmf_subsystem 00:05:20.058 Calling clear_nbd_subsystem 00:05:20.058 Calling clear_ublk_subsystem 00:05:20.058 Calling clear_vhost_blk_subsystem 00:05:20.058 Calling clear_vhost_scsi_subsystem 00:05:20.058 Calling clear_bdev_subsystem 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:20.318 18:54:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.578 18:54:49 json_config -- json_config/json_config.sh@352 -- # break 00:05:20.578 18:54:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:20.578 18:54:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:20.578 18:54:49 json_config -- json_config/common.sh@31 -- # local app=target 00:05:20.578 18:54:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.578 18:54:49 json_config -- json_config/common.sh@35 -- # [[ -n 114364 ]] 00:05:20.578 18:54:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 114364 00:05:20.578 18:54:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.578 18:54:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.578 18:54:49 json_config -- json_config/common.sh@41 -- # kill -0 114364 00:05:20.578 18:54:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.150 18:54:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.150 18:54:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.150 18:54:50 json_config -- json_config/common.sh@41 -- # kill -0 114364 00:05:21.150 18:54:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.150 18:54:50 json_config -- json_config/common.sh@43 -- # break 00:05:21.150 18:54:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.150 18:54:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.150 SPDK target shutdown done 00:05:21.150 18:54:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:21.150 INFO: relaunching applications... 
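The shutdown above is driven by json_config_test_shutdown_app in json_config/common.sh; its visible logic is a SIGINT followed by a bounded poll. A rough re-statement of what the xtrace shows (PID taken from this run):

    # SIGINT the target, then poll it with kill -0, 30 tries at 0.5 s each.
    pid=114364
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
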
00:05:21.150 18:54:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.150 18:54:50 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.150 18:54:50 json_config -- json_config/common.sh@10 -- # shift 00:05:21.150 18:54:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.150 18:54:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.150 18:54:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.150 18:54:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.150 18:54:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.150 18:54:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=115496 00:05:21.150 18:54:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.150 Waiting for target to run... 00:05:21.150 18:54:50 json_config -- json_config/common.sh@25 -- # waitforlisten 115496 /var/tmp/spdk_tgt.sock 00:05:21.150 18:54:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@833 -- # '[' -z 115496 ']' 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.150 18:54:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.150 [2024-11-05 18:54:50.298635] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:21.150 [2024-11-05 18:54:50.298705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115496 ] 00:05:21.410 [2024-11-05 18:54:50.547565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.410 [2024-11-05 18:54:50.575979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.981 [2024-11-05 18:54:51.092737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.981 [2024-11-05 18:54:51.125127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.981 18:54:51 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:21.981 18:54:51 json_config -- common/autotest_common.sh@866 -- # return 0 00:05:21.981 18:54:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.981 00:05:21.981 18:54:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:21.981 18:54:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:21.981 INFO: Checking if target configuration is the same... 
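Relaunching, as traced above, is just starting spdk_tgt against the JSON the previous instance saved and waiting for its RPC socket to come alive. A sketch under the assumption that polling any cheap RPC (rpc_get_methods here) is an acceptable stand-in for the suite's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &
    tgt_pid=$!
    # Poll until the target answers on its RPC socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods \
        >/dev/null 2>&1; do
        sleep 0.1
    done
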
00:05:21.981 18:54:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.981 18:54:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:21.981 18:54:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.981 + '[' 2 -ne 2 ']' 00:05:21.981 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.981 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.981 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.981 +++ basename /dev/fd/62 00:05:21.981 ++ mktemp /tmp/62.XXX 00:05:21.981 + tmp_file_1=/tmp/62.fdj 00:05:21.981 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.981 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.981 + tmp_file_2=/tmp/spdk_tgt_config.json.DJZ 00:05:21.981 + ret=0 00:05:21.981 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.242 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.242 + diff -u /tmp/62.fdj /tmp/spdk_tgt_config.json.DJZ 00:05:22.242 + echo 'INFO: JSON config files are the same' 00:05:22.242 INFO: JSON config files are the same 00:05:22.242 + rm /tmp/62.fdj /tmp/spdk_tgt_config.json.DJZ 00:05:22.242 + exit 0 00:05:22.242 18:54:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:22.242 18:54:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.242 INFO: changing configuration and checking if this can be detected... 00:05:22.242 18:54:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.242 18:54:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.503 18:54:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.503 18:54:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:22.503 18:54:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.503 + '[' 2 -ne 2 ']' 00:05:22.503 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.503 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
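The "same configuration" verdict above comes from json_diff.sh: both inputs go through the same sort filter, then a plain diff decides. Spelled out as one sketch (the live/saved variable names are illustrative, not from the script):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER="$SPDK/test/json_config/config_filter.py"
    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Normalize both the running target's config and the saved file.
    $RPC save_config | "$FILTER" -method sort > "$live"
    "$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > "$saved"
    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"
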
00:05:22.503 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:22.503 +++ basename /dev/fd/62 00:05:22.503 ++ mktemp /tmp/62.XXX 00:05:22.503 + tmp_file_1=/tmp/62.dgy 00:05:22.503 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.503 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.503 + tmp_file_2=/tmp/spdk_tgt_config.json.XEB 00:05:22.503 + ret=0 00:05:22.503 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.763 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.763 + diff -u /tmp/62.dgy /tmp/spdk_tgt_config.json.XEB 00:05:23.022 + ret=1 00:05:23.022 + echo '=== Start of file: /tmp/62.dgy ===' 00:05:23.022 + cat /tmp/62.dgy 00:05:23.022 + echo '=== End of file: /tmp/62.dgy ===' 00:05:23.022 + echo '' 00:05:23.022 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XEB ===' 00:05:23.022 + cat /tmp/spdk_tgt_config.json.XEB 00:05:23.022 + echo '=== End of file: /tmp/spdk_tgt_config.json.XEB ===' 00:05:23.022 + echo '' 00:05:23.022 + rm /tmp/62.dgy /tmp/spdk_tgt_config.json.XEB 00:05:23.022 + exit 1 00:05:23.022 18:54:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:23.022 INFO: configuration change detected. 00:05:23.022 18:54:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:23.022 18:54:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:23.022 18:54:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.022 18:54:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 115496 ]] 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.023 18:54:52 json_config -- json_config/json_config.sh@330 -- # killprocess 115496 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@952 -- # '[' -z 115496 ']' 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@956 -- # kill -0 115496 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@957 -- # uname 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.023 18:54:52 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 115496 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 115496' 00:05:23.023 killing process with pid 115496 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@971 -- # kill 115496 00:05:23.023 18:54:52 json_config -- common/autotest_common.sh@976 -- # wait 115496 00:05:23.282 18:54:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.282 18:54:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:23.282 18:54:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.282 18:54:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 18:54:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:23.282 18:54:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:23.282 INFO: Success 00:05:23.282 00:05:23.282 real 0m7.392s 00:05:23.282 user 0m9.059s 00:05:23.282 sys 0m1.879s 00:05:23.282 18:54:52 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.282 18:54:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.282 ************************************ 00:05:23.282 END TEST json_config 00:05:23.282 ************************************ 00:05:23.282 18:54:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.282 18:54:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.282 18:54:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.282 18:54:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 START TEST json_config_extra_key 00:05:23.544 ************************************ 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.544 18:54:52 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.544 --rc genhtml_branch_coverage=1 00:05:23.544 --rc genhtml_function_coverage=1 00:05:23.544 --rc genhtml_legend=1 00:05:23.544 --rc geninfo_all_blocks=1 00:05:23.544 --rc geninfo_unexecuted_blocks=1 00:05:23.544 00:05:23.544 ' 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.544 --rc genhtml_branch_coverage=1 00:05:23.544 --rc genhtml_function_coverage=1 00:05:23.544 --rc genhtml_legend=1 00:05:23.544 --rc geninfo_all_blocks=1 00:05:23.544 --rc geninfo_unexecuted_blocks=1 00:05:23.544 00:05:23.544 ' 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.544 --rc genhtml_branch_coverage=1 00:05:23.544 --rc genhtml_function_coverage=1 00:05:23.544 --rc genhtml_legend=1 00:05:23.544 --rc geninfo_all_blocks=1 00:05:23.544 --rc geninfo_unexecuted_blocks=1 00:05:23.544 00:05:23.544 ' 00:05:23.544 18:54:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.544 --rc genhtml_branch_coverage=1 00:05:23.544 --rc genhtml_function_coverage=1 00:05:23.544 --rc genhtml_legend=1 00:05:23.544 --rc geninfo_all_blocks=1 00:05:23.544 --rc geninfo_unexecuted_blocks=1 00:05:23.544 00:05:23.544 ' 00:05:23.544 18:54:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.544 18:54:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.544 18:54:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.544 18:54:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.544 18:54:52 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.544 18:54:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.544 18:54:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:23.544 18:54:52 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:23.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:23.545 18:54:52 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.545 
18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.545 INFO: launching applications... 00:05:23.545 18:54:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=115968 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.545 Waiting for target to run... 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 115968 /var/tmp/spdk_tgt.sock 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 115968 ']' 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.545 18:54:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.545 18:54:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.806 [2024-11-05 18:54:52.886142] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
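The declare -A lines echoed above are how json_config/common.sh tracks each application: PID, RPC socket, CLI parameters, and config path are all keyed by app name ("target" here; the same maps could hold an initiator entry in other tests). A condensed sketch of the launch using those maps:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")

    app=target
    # Intentionally unquoted: app_params must word-split into separate flags.
    "$SPDK/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
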
00:05:23.806 [2024-11-05 18:54:52.886195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115968 ] 00:05:24.066 [2024-11-05 18:54:53.253166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.066 [2024-11-05 18:54:53.289754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.636 18:54:53 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.636 18:54:53 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.636 00:05:24.636 18:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.636 INFO: shutting down applications... 00:05:24.636 18:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 115968 ]] 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 115968 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115968 00:05:24.636 18:54:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 115968 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.896 18:54:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.896 SPDK target shutdown done 00:05:24.896 18:54:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.896 Success 00:05:24.896 00:05:24.896 real 0m1.563s 00:05:24.896 user 0m1.143s 00:05:24.896 sys 0m0.461s 00:05:24.896 18:54:54 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.896 18:54:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.896 ************************************ 00:05:24.896 END TEST json_config_extra_key 00:05:24.896 ************************************ 00:05:25.157 18:54:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.157 18:54:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.157 18:54:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.157 18:54:54 -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 ************************************ 00:05:25.157 START TEST alias_rpc 00:05:25.157 
************************************ 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:25.157 * Looking for test storage... 00:05:25.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.157 18:54:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.157 --rc genhtml_branch_coverage=1 00:05:25.157 --rc genhtml_function_coverage=1 00:05:25.157 --rc genhtml_legend=1 00:05:25.157 --rc geninfo_all_blocks=1 00:05:25.157 --rc geninfo_unexecuted_blocks=1 00:05:25.157 00:05:25.157 ' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.157 --rc genhtml_branch_coverage=1 00:05:25.157 --rc genhtml_function_coverage=1 00:05:25.157 --rc genhtml_legend=1 00:05:25.157 --rc geninfo_all_blocks=1 00:05:25.157 --rc geninfo_unexecuted_blocks=1 00:05:25.157 00:05:25.157 ' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.157 --rc genhtml_branch_coverage=1 00:05:25.157 --rc genhtml_function_coverage=1 00:05:25.157 --rc genhtml_legend=1 00:05:25.157 --rc geninfo_all_blocks=1 00:05:25.157 --rc geninfo_unexecuted_blocks=1 00:05:25.157 00:05:25.157 ' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.157 --rc genhtml_branch_coverage=1 00:05:25.157 --rc genhtml_function_coverage=1 00:05:25.157 --rc genhtml_legend=1 00:05:25.157 --rc geninfo_all_blocks=1 00:05:25.157 --rc geninfo_unexecuted_blocks=1 00:05:25.157 00:05:25.157 ' 00:05:25.157 18:54:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.157 18:54:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=116366 00:05:25.157 18:54:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.157 18:54:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 116366 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 116366 ']' 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:25.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.157 18:54:54 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.158 18:54:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.418 [2024-11-05 18:54:54.522235] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:25.418 [2024-11-05 18:54:54.522291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116366 ] 00:05:25.418 [2024-11-05 18:54:54.593919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.418 [2024-11-05 18:54:54.630209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.995 18:54:55 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.995 18:54:55 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:25.995 18:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:26.256 18:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 116366 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 116366 ']' 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 116366 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 116366 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 116366' 00:05:26.256 killing process with pid 116366 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@971 -- # kill 116366 00:05:26.256 18:54:55 alias_rpc -- common/autotest_common.sh@976 -- # wait 116366 00:05:26.516 00:05:26.516 real 0m1.511s 00:05:26.516 user 0m1.681s 00:05:26.516 sys 0m0.388s 00:05:26.516 18:54:55 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.516 18:54:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 ************************************ 00:05:26.516 END TEST alias_rpc 00:05:26.516 ************************************ 00:05:26.516 18:54:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.516 18:54:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.516 18:54:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.516 18:54:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.516 18:54:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.777 ************************************ 00:05:26.777 START TEST spdkcli_tcp 00:05:26.777 ************************************ 00:05:26.777 18:54:55 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.777 * Looking for test storage... 
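The alias_rpc teardown just above goes through killprocess from autotest_common.sh. Its visible steps, simplified into one function (the real helper has further branches, e.g. for processes running under sudo, which are omitted here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1            # must still be alive
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                       # reap and propagate exit status
        fi
    }
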
00:05:26.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.777 18:54:55 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:26.777 18:54:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:26.777 18:54:55 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:26.777 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.777 18:54:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.778 18:54:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.778 18:54:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:26.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.778 --rc genhtml_branch_coverage=1 00:05:26.778 --rc genhtml_function_coverage=1 00:05:26.778 --rc genhtml_legend=1 00:05:26.778 --rc geninfo_all_blocks=1 00:05:26.778 --rc geninfo_unexecuted_blocks=1 00:05:26.778 00:05:26.778 ' 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:26.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.778 --rc genhtml_branch_coverage=1 00:05:26.778 --rc genhtml_function_coverage=1 00:05:26.778 --rc genhtml_legend=1 00:05:26.778 --rc geninfo_all_blocks=1 00:05:26.778 --rc 
geninfo_unexecuted_blocks=1 00:05:26.778 00:05:26.778 ' 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:26.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.778 --rc genhtml_branch_coverage=1 00:05:26.778 --rc genhtml_function_coverage=1 00:05:26.778 --rc genhtml_legend=1 00:05:26.778 --rc geninfo_all_blocks=1 00:05:26.778 --rc geninfo_unexecuted_blocks=1 00:05:26.778 00:05:26.778 ' 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:26.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.778 --rc genhtml_branch_coverage=1 00:05:26.778 --rc genhtml_function_coverage=1 00:05:26.778 --rc genhtml_legend=1 00:05:26.778 --rc geninfo_all_blocks=1 00:05:26.778 --rc geninfo_unexecuted_blocks=1 00:05:26.778 00:05:26.778 ' 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=116761 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 116761 00:05:26.778 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 116761 ']' 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.778 18:54:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.039 [2024-11-05 18:54:56.118743] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
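spdkcli_tcp exercises RPC over TCP: the target only listens on a UNIX socket, so (as the lines that follow show) socat bridges that socket to 127.0.0.1:9998 and rpc.py connects with retries and a timeout. The whole arrangement as one sketch, assuming the default /var/tmp/spdk.sock socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -p 0 &
    spdk_tgt_pid=$!
    sleep 1     # crude stand-in for the suite's waitforlisten
    # Expose the UNIX-domain RPC socket on TCP port 9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100: retry the connection up to 100 times; -t 2: per-call timeout.
    "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
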
00:05:27.039 [2024-11-05 18:54:56.118803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116761 ] 00:05:27.039 [2024-11-05 18:54:56.190420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.039 [2024-11-05 18:54:56.227339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.039 [2024-11-05 18:54:56.227342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.611 18:54:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.611 18:54:56 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:27.611 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=117077 00:05:27.611 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:27.611 18:54:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.873 [ 00:05:27.873 "bdev_malloc_delete", 00:05:27.873 "bdev_malloc_create", 00:05:27.873 "bdev_null_resize", 00:05:27.873 "bdev_null_delete", 00:05:27.873 "bdev_null_create", 00:05:27.873 "bdev_nvme_cuse_unregister", 00:05:27.873 "bdev_nvme_cuse_register", 00:05:27.873 "bdev_opal_new_user", 00:05:27.873 "bdev_opal_set_lock_state", 00:05:27.873 "bdev_opal_delete", 00:05:27.873 "bdev_opal_get_info", 00:05:27.873 "bdev_opal_create", 00:05:27.873 "bdev_nvme_opal_revert", 00:05:27.873 "bdev_nvme_opal_init", 00:05:27.873 "bdev_nvme_send_cmd", 00:05:27.873 "bdev_nvme_set_keys", 00:05:27.873 "bdev_nvme_get_path_iostat", 00:05:27.873 "bdev_nvme_get_mdns_discovery_info", 00:05:27.873 "bdev_nvme_stop_mdns_discovery", 00:05:27.873 "bdev_nvme_start_mdns_discovery", 00:05:27.873 "bdev_nvme_set_multipath_policy", 00:05:27.873 "bdev_nvme_set_preferred_path", 00:05:27.873 "bdev_nvme_get_io_paths", 00:05:27.873 "bdev_nvme_remove_error_injection", 00:05:27.873 "bdev_nvme_add_error_injection", 00:05:27.873 "bdev_nvme_get_discovery_info", 00:05:27.873 "bdev_nvme_stop_discovery", 00:05:27.873 "bdev_nvme_start_discovery", 00:05:27.873 "bdev_nvme_get_controller_health_info", 00:05:27.873 "bdev_nvme_disable_controller", 00:05:27.873 "bdev_nvme_enable_controller", 00:05:27.873 "bdev_nvme_reset_controller", 00:05:27.873 "bdev_nvme_get_transport_statistics", 00:05:27.873 "bdev_nvme_apply_firmware", 00:05:27.873 "bdev_nvme_detach_controller", 00:05:27.873 "bdev_nvme_get_controllers", 00:05:27.873 "bdev_nvme_attach_controller", 00:05:27.873 "bdev_nvme_set_hotplug", 00:05:27.873 "bdev_nvme_set_options", 00:05:27.873 "bdev_passthru_delete", 00:05:27.873 "bdev_passthru_create", 00:05:27.873 "bdev_lvol_set_parent_bdev", 00:05:27.873 "bdev_lvol_set_parent", 00:05:27.873 "bdev_lvol_check_shallow_copy", 00:05:27.873 "bdev_lvol_start_shallow_copy", 00:05:27.873 "bdev_lvol_grow_lvstore", 00:05:27.873 "bdev_lvol_get_lvols", 00:05:27.873 "bdev_lvol_get_lvstores", 00:05:27.873 "bdev_lvol_delete", 00:05:27.873 "bdev_lvol_set_read_only", 00:05:27.873 "bdev_lvol_resize", 00:05:27.873 "bdev_lvol_decouple_parent", 00:05:27.873 "bdev_lvol_inflate", 00:05:27.873 "bdev_lvol_rename", 00:05:27.873 "bdev_lvol_clone_bdev", 00:05:27.873 "bdev_lvol_clone", 00:05:27.873 "bdev_lvol_snapshot", 00:05:27.873 "bdev_lvol_create", 00:05:27.873 "bdev_lvol_delete_lvstore", 00:05:27.873 "bdev_lvol_rename_lvstore", 
00:05:27.873 "bdev_lvol_create_lvstore", 00:05:27.873 "bdev_raid_set_options", 00:05:27.873 "bdev_raid_remove_base_bdev", 00:05:27.873 "bdev_raid_add_base_bdev", 00:05:27.873 "bdev_raid_delete", 00:05:27.873 "bdev_raid_create", 00:05:27.873 "bdev_raid_get_bdevs", 00:05:27.873 "bdev_error_inject_error", 00:05:27.873 "bdev_error_delete", 00:05:27.873 "bdev_error_create", 00:05:27.873 "bdev_split_delete", 00:05:27.873 "bdev_split_create", 00:05:27.873 "bdev_delay_delete", 00:05:27.873 "bdev_delay_create", 00:05:27.873 "bdev_delay_update_latency", 00:05:27.873 "bdev_zone_block_delete", 00:05:27.873 "bdev_zone_block_create", 00:05:27.873 "blobfs_create", 00:05:27.873 "blobfs_detect", 00:05:27.873 "blobfs_set_cache_size", 00:05:27.873 "bdev_aio_delete", 00:05:27.873 "bdev_aio_rescan", 00:05:27.873 "bdev_aio_create", 00:05:27.873 "bdev_ftl_set_property", 00:05:27.873 "bdev_ftl_get_properties", 00:05:27.873 "bdev_ftl_get_stats", 00:05:27.873 "bdev_ftl_unmap", 00:05:27.873 "bdev_ftl_unload", 00:05:27.873 "bdev_ftl_delete", 00:05:27.873 "bdev_ftl_load", 00:05:27.873 "bdev_ftl_create", 00:05:27.873 "bdev_virtio_attach_controller", 00:05:27.873 "bdev_virtio_scsi_get_devices", 00:05:27.873 "bdev_virtio_detach_controller", 00:05:27.873 "bdev_virtio_blk_set_hotplug", 00:05:27.873 "bdev_iscsi_delete", 00:05:27.873 "bdev_iscsi_create", 00:05:27.873 "bdev_iscsi_set_options", 00:05:27.873 "accel_error_inject_error", 00:05:27.873 "ioat_scan_accel_module", 00:05:27.873 "dsa_scan_accel_module", 00:05:27.873 "iaa_scan_accel_module", 00:05:27.873 "vfu_virtio_create_fs_endpoint", 00:05:27.873 "vfu_virtio_create_scsi_endpoint", 00:05:27.873 "vfu_virtio_scsi_remove_target", 00:05:27.873 "vfu_virtio_scsi_add_target", 00:05:27.873 "vfu_virtio_create_blk_endpoint", 00:05:27.873 "vfu_virtio_delete_endpoint", 00:05:27.873 "keyring_file_remove_key", 00:05:27.873 "keyring_file_add_key", 00:05:27.873 "keyring_linux_set_options", 00:05:27.873 "fsdev_aio_delete", 00:05:27.873 "fsdev_aio_create", 00:05:27.873 "iscsi_get_histogram", 00:05:27.873 "iscsi_enable_histogram", 00:05:27.873 "iscsi_set_options", 00:05:27.873 "iscsi_get_auth_groups", 00:05:27.873 "iscsi_auth_group_remove_secret", 00:05:27.873 "iscsi_auth_group_add_secret", 00:05:27.873 "iscsi_delete_auth_group", 00:05:27.873 "iscsi_create_auth_group", 00:05:27.873 "iscsi_set_discovery_auth", 00:05:27.873 "iscsi_get_options", 00:05:27.873 "iscsi_target_node_request_logout", 00:05:27.873 "iscsi_target_node_set_redirect", 00:05:27.873 "iscsi_target_node_set_auth", 00:05:27.873 "iscsi_target_node_add_lun", 00:05:27.873 "iscsi_get_stats", 00:05:27.873 "iscsi_get_connections", 00:05:27.873 "iscsi_portal_group_set_auth", 00:05:27.873 "iscsi_start_portal_group", 00:05:27.873 "iscsi_delete_portal_group", 00:05:27.873 "iscsi_create_portal_group", 00:05:27.873 "iscsi_get_portal_groups", 00:05:27.873 "iscsi_delete_target_node", 00:05:27.873 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.873 "iscsi_target_node_add_pg_ig_maps", 00:05:27.873 "iscsi_create_target_node", 00:05:27.873 "iscsi_get_target_nodes", 00:05:27.873 "iscsi_delete_initiator_group", 00:05:27.873 "iscsi_initiator_group_remove_initiators", 00:05:27.873 "iscsi_initiator_group_add_initiators", 00:05:27.873 "iscsi_create_initiator_group", 00:05:27.873 "iscsi_get_initiator_groups", 00:05:27.873 "nvmf_set_crdt", 00:05:27.873 "nvmf_set_config", 00:05:27.873 "nvmf_set_max_subsystems", 00:05:27.873 "nvmf_stop_mdns_prr", 00:05:27.873 "nvmf_publish_mdns_prr", 00:05:27.873 "nvmf_subsystem_get_listeners", 00:05:27.873 
"nvmf_subsystem_get_qpairs", 00:05:27.873 "nvmf_subsystem_get_controllers", 00:05:27.873 "nvmf_get_stats", 00:05:27.873 "nvmf_get_transports", 00:05:27.873 "nvmf_create_transport", 00:05:27.873 "nvmf_get_targets", 00:05:27.873 "nvmf_delete_target", 00:05:27.873 "nvmf_create_target", 00:05:27.873 "nvmf_subsystem_allow_any_host", 00:05:27.873 "nvmf_subsystem_set_keys", 00:05:27.873 "nvmf_subsystem_remove_host", 00:05:27.873 "nvmf_subsystem_add_host", 00:05:27.873 "nvmf_ns_remove_host", 00:05:27.873 "nvmf_ns_add_host", 00:05:27.873 "nvmf_subsystem_remove_ns", 00:05:27.873 "nvmf_subsystem_set_ns_ana_group", 00:05:27.873 "nvmf_subsystem_add_ns", 00:05:27.873 "nvmf_subsystem_listener_set_ana_state", 00:05:27.873 "nvmf_discovery_get_referrals", 00:05:27.873 "nvmf_discovery_remove_referral", 00:05:27.873 "nvmf_discovery_add_referral", 00:05:27.873 "nvmf_subsystem_remove_listener", 00:05:27.873 "nvmf_subsystem_add_listener", 00:05:27.873 "nvmf_delete_subsystem", 00:05:27.873 "nvmf_create_subsystem", 00:05:27.873 "nvmf_get_subsystems", 00:05:27.873 "env_dpdk_get_mem_stats", 00:05:27.873 "nbd_get_disks", 00:05:27.873 "nbd_stop_disk", 00:05:27.873 "nbd_start_disk", 00:05:27.873 "ublk_recover_disk", 00:05:27.873 "ublk_get_disks", 00:05:27.873 "ublk_stop_disk", 00:05:27.873 "ublk_start_disk", 00:05:27.873 "ublk_destroy_target", 00:05:27.873 "ublk_create_target", 00:05:27.873 "virtio_blk_create_transport", 00:05:27.873 "virtio_blk_get_transports", 00:05:27.873 "vhost_controller_set_coalescing", 00:05:27.873 "vhost_get_controllers", 00:05:27.873 "vhost_delete_controller", 00:05:27.873 "vhost_create_blk_controller", 00:05:27.873 "vhost_scsi_controller_remove_target", 00:05:27.873 "vhost_scsi_controller_add_target", 00:05:27.873 "vhost_start_scsi_controller", 00:05:27.873 "vhost_create_scsi_controller", 00:05:27.873 "thread_set_cpumask", 00:05:27.874 "scheduler_set_options", 00:05:27.874 "framework_get_governor", 00:05:27.874 "framework_get_scheduler", 00:05:27.874 "framework_set_scheduler", 00:05:27.874 "framework_get_reactors", 00:05:27.874 "thread_get_io_channels", 00:05:27.874 "thread_get_pollers", 00:05:27.874 "thread_get_stats", 00:05:27.874 "framework_monitor_context_switch", 00:05:27.874 "spdk_kill_instance", 00:05:27.874 "log_enable_timestamps", 00:05:27.874 "log_get_flags", 00:05:27.874 "log_clear_flag", 00:05:27.874 "log_set_flag", 00:05:27.874 "log_get_level", 00:05:27.874 "log_set_level", 00:05:27.874 "log_get_print_level", 00:05:27.874 "log_set_print_level", 00:05:27.874 "framework_enable_cpumask_locks", 00:05:27.874 "framework_disable_cpumask_locks", 00:05:27.874 "framework_wait_init", 00:05:27.874 "framework_start_init", 00:05:27.874 "scsi_get_devices", 00:05:27.874 "bdev_get_histogram", 00:05:27.874 "bdev_enable_histogram", 00:05:27.874 "bdev_set_qos_limit", 00:05:27.874 "bdev_set_qd_sampling_period", 00:05:27.874 "bdev_get_bdevs", 00:05:27.874 "bdev_reset_iostat", 00:05:27.874 "bdev_get_iostat", 00:05:27.874 "bdev_examine", 00:05:27.874 "bdev_wait_for_examine", 00:05:27.874 "bdev_set_options", 00:05:27.874 "accel_get_stats", 00:05:27.874 "accel_set_options", 00:05:27.874 "accel_set_driver", 00:05:27.874 "accel_crypto_key_destroy", 00:05:27.874 "accel_crypto_keys_get", 00:05:27.874 "accel_crypto_key_create", 00:05:27.874 "accel_assign_opc", 00:05:27.874 "accel_get_module_info", 00:05:27.874 "accel_get_opc_assignments", 00:05:27.874 "vmd_rescan", 00:05:27.874 "vmd_remove_device", 00:05:27.874 "vmd_enable", 00:05:27.874 "sock_get_default_impl", 00:05:27.874 "sock_set_default_impl", 
00:05:27.874 "sock_impl_set_options", 00:05:27.874 "sock_impl_get_options", 00:05:27.874 "iobuf_get_stats", 00:05:27.874 "iobuf_set_options", 00:05:27.874 "keyring_get_keys", 00:05:27.874 "vfu_tgt_set_base_path", 00:05:27.874 "framework_get_pci_devices", 00:05:27.874 "framework_get_config", 00:05:27.874 "framework_get_subsystems", 00:05:27.874 "fsdev_set_opts", 00:05:27.874 "fsdev_get_opts", 00:05:27.874 "trace_get_info", 00:05:27.874 "trace_get_tpoint_group_mask", 00:05:27.874 "trace_disable_tpoint_group", 00:05:27.874 "trace_enable_tpoint_group", 00:05:27.874 "trace_clear_tpoint_mask", 00:05:27.874 "trace_set_tpoint_mask", 00:05:27.874 "notify_get_notifications", 00:05:27.874 "notify_get_types", 00:05:27.874 "spdk_get_version", 00:05:27.874 "rpc_get_methods" 00:05:27.874 ] 00:05:27.874 18:54:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.874 18:54:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.874 18:54:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 116761 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 116761 ']' 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 116761 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 116761 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 116761' 00:05:27.874 killing process with pid 116761 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 116761 00:05:27.874 18:54:57 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 116761 00:05:28.136 00:05:28.136 real 0m1.517s 00:05:28.136 user 0m2.777s 00:05:28.136 sys 0m0.436s 00:05:28.136 18:54:57 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.136 18:54:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.136 ************************************ 00:05:28.136 END TEST spdkcli_tcp 00:05:28.136 ************************************ 00:05:28.136 18:54:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.136 18:54:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.136 18:54:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.136 18:54:57 -- common/autotest_common.sh@10 -- # set +x 00:05:28.136 ************************************ 00:05:28.136 START TEST dpdk_mem_utility 00:05:28.136 ************************************ 00:05:28.136 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.398 * Looking for test storage... 
00:05:28.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.398 18:54:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.398 --rc genhtml_branch_coverage=1 00:05:28.398 --rc genhtml_function_coverage=1 00:05:28.398 --rc genhtml_legend=1 00:05:28.398 --rc geninfo_all_blocks=1 00:05:28.398 --rc geninfo_unexecuted_blocks=1 00:05:28.398 00:05:28.398 ' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.398 --rc 
genhtml_branch_coverage=1 00:05:28.398 --rc genhtml_function_coverage=1 00:05:28.398 --rc genhtml_legend=1 00:05:28.398 --rc geninfo_all_blocks=1 00:05:28.398 --rc geninfo_unexecuted_blocks=1 00:05:28.398 00:05:28.398 ' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.398 --rc genhtml_branch_coverage=1 00:05:28.398 --rc genhtml_function_coverage=1 00:05:28.398 --rc genhtml_legend=1 00:05:28.398 --rc geninfo_all_blocks=1 00:05:28.398 --rc geninfo_unexecuted_blocks=1 00:05:28.398 00:05:28.398 ' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.398 --rc genhtml_branch_coverage=1 00:05:28.398 --rc genhtml_function_coverage=1 00:05:28.398 --rc genhtml_legend=1 00:05:28.398 --rc geninfo_all_blocks=1 00:05:28.398 --rc geninfo_unexecuted_blocks=1 00:05:28.398 00:05:28.398 ' 00:05:28.398 18:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.398 18:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=117178 00:05:28.398 18:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 117178 00:05:28.398 18:54:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 117178 ']' 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.398 18:54:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.398 [2024-11-05 18:54:57.699700] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
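The memory report that follows is produced in three steps, each visible in this trace: launch spdk_tgt, ask it over RPC to dump the DPDK allocator state, then post-process the dump file. A condensed sketch; the -m 0 flag is copied verbatim from the trace, and its reading as "detail view for heap 0" is an assumption:

# start the target, then wait for its RPC socket at /var/tmp/spdk.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &

# dump DPDK memory state to a file; the call returns
# { "filename": "/tmp/spdk_mem_dump.txt" }
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats

# summarize the dump, then print the per-element detail for heap 0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0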
00:05:28.398 [2024-11-05 18:54:57.699758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117178 ] 00:05:28.658 [2024-11-05 18:54:57.770333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.658 [2024-11-05 18:54:57.806311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.229 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.229 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:29.229 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.229 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.229 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.229 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.229 { 00:05:29.229 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.229 } 00:05:29.229 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.229 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:29.229 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:29.229 1 heaps totaling size 810.000000 MiB 00:05:29.229 size: 810.000000 MiB heap id: 0 00:05:29.229 end heaps---------- 00:05:29.229 9 mempools totaling size 595.772034 MiB 00:05:29.229 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.229 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.229 size: 92.545471 MiB name: bdev_io_117178 00:05:29.229 size: 50.003479 MiB name: msgpool_117178 00:05:29.229 size: 36.509338 MiB name: fsdev_io_117178 00:05:29.229 size: 21.763794 MiB name: PDU_Pool 00:05:29.229 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.229 size: 4.133484 MiB name: evtpool_117178 00:05:29.229 size: 0.026123 MiB name: Session_Pool 00:05:29.229 end mempools------- 00:05:29.229 6 memzones totaling size 4.142822 MiB 00:05:29.229 size: 1.000366 MiB name: RG_ring_0_117178 00:05:29.229 size: 1.000366 MiB name: RG_ring_1_117178 00:05:29.229 size: 1.000366 MiB name: RG_ring_4_117178 00:05:29.229 size: 1.000366 MiB name: RG_ring_5_117178 00:05:29.229 size: 0.125366 MiB name: RG_ring_2_117178 00:05:29.229 size: 0.015991 MiB name: RG_ring_3_117178 00:05:29.229 end memzones------- 00:05:29.491 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.491 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:29.491 list of free elements. 
size: 10.862488 MiB 00:05:29.491 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:29.491 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:29.491 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:29.491 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:29.491 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:29.491 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:29.491 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:29.491 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:29.491 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:29.491 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:29.491 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:29.491 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:29.491 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:29.491 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:29.491 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:29.491 list of standard malloc elements. size: 199.218628 MiB 00:05:29.491 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:29.491 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:29.491 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:29.491 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:29.491 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:29.491 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:29.491 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:29.491 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:29.491 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:29.491 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:29.491 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:29.491 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:29.491 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:29.492 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:29.492 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:29.492 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:29.492 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:29.492 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:29.492 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:29.492 list of memzone associated elements. size: 599.918884 MiB 00:05:29.492 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:29.492 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.492 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:29.492 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.492 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:29.492 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_117178_0 00:05:29.492 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:29.492 associated memzone info: size: 48.002930 MiB name: MP_msgpool_117178_0 00:05:29.492 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:29.492 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_117178_0 00:05:29.492 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:29.492 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.492 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:29.492 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.492 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:29.492 associated memzone info: size: 3.000122 MiB name: MP_evtpool_117178_0 00:05:29.492 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:29.492 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_117178 00:05:29.492 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:29.492 associated memzone info: size: 1.007996 MiB name: MP_evtpool_117178 00:05:29.492 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:29.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.492 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:29.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.492 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:29.492 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.492 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:29.492 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.492 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:29.492 associated memzone info: size: 1.000366 MiB name: RG_ring_0_117178 00:05:29.492 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:29.492 associated memzone info: size: 1.000366 MiB name: RG_ring_1_117178 00:05:29.492 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:29.492 associated memzone info: size: 1.000366 MiB name: RG_ring_4_117178 00:05:29.492 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:29.492 associated memzone info: size: 1.000366 MiB name: RG_ring_5_117178 00:05:29.492 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:29.492 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_117178 00:05:29.492 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:29.492 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_117178 00:05:29.492 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:29.492 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.492 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:29.492 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.492 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:29.492 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.492 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:29.492 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_117178 00:05:29.492 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:29.492 associated memzone info: size: 0.125366 MiB name: RG_ring_2_117178 00:05:29.492 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:29.492 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.492 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:29.492 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.492 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:29.492 associated memzone info: size: 0.015991 MiB name: RG_ring_3_117178 00:05:29.492 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:29.492 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.492 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:29.492 associated memzone info: size: 0.000183 MiB name: MP_msgpool_117178 00:05:29.492 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:29.492 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_117178 00:05:29.492 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:29.492 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_117178 00:05:29.492 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:29.492 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.492 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.492 18:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 117178 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 117178 ']' 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 117178 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 117178 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 117178' 00:05:29.492 killing process with pid 117178 00:05:29.492 18:54:58 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 117178 00:05:29.492 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 117178 00:05:29.753 00:05:29.753 real 0m1.413s 00:05:29.753 user 0m1.516s 00:05:29.753 sys 0m0.386s 00:05:29.753 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.753 18:54:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.753 ************************************ 00:05:29.753 END TEST dpdk_mem_utility 00:05:29.753 ************************************ 00:05:29.753 18:54:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.753 18:54:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.753 18:54:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.753 18:54:58 -- common/autotest_common.sh@10 -- # set +x 00:05:29.753 ************************************ 00:05:29.753 START TEST event 00:05:29.753 ************************************ 00:05:29.753 18:54:58 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:29.753 * Looking for test storage... 00:05:29.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:29.753 18:54:59 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.753 18:54:59 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.753 18:54:59 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.014 18:54:59 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.014 18:54:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.014 18:54:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.014 18:54:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.014 18:54:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.014 18:54:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.014 18:54:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.014 18:54:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.014 18:54:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.014 18:54:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.014 18:54:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.014 18:54:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.014 18:54:59 event -- scripts/common.sh@344 -- # case "$op" in 00:05:30.014 18:54:59 event -- scripts/common.sh@345 -- # : 1 00:05:30.014 18:54:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.014 18:54:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.014 18:54:59 event -- scripts/common.sh@365 -- # decimal 1 00:05:30.014 18:54:59 event -- scripts/common.sh@353 -- # local d=1 00:05:30.014 18:54:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.014 18:54:59 event -- scripts/common.sh@355 -- # echo 1 00:05:30.014 18:54:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.014 18:54:59 event -- scripts/common.sh@366 -- # decimal 2 00:05:30.014 18:54:59 event -- scripts/common.sh@353 -- # local d=2 00:05:30.014 18:54:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.014 18:54:59 event -- scripts/common.sh@355 -- # echo 2 00:05:30.014 18:54:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.014 18:54:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.014 18:54:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.014 18:54:59 event -- scripts/common.sh@368 -- # return 0 00:05:30.014 18:54:59 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.014 18:54:59 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.014 --rc genhtml_branch_coverage=1 00:05:30.014 --rc genhtml_function_coverage=1 00:05:30.014 --rc genhtml_legend=1 00:05:30.014 --rc geninfo_all_blocks=1 00:05:30.014 --rc geninfo_unexecuted_blocks=1 00:05:30.014 00:05:30.015 ' 00:05:30.015 18:54:59 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.015 --rc genhtml_branch_coverage=1 00:05:30.015 --rc genhtml_function_coverage=1 00:05:30.015 --rc genhtml_legend=1 00:05:30.015 --rc geninfo_all_blocks=1 00:05:30.015 --rc geninfo_unexecuted_blocks=1 00:05:30.015 00:05:30.015 ' 00:05:30.015 18:54:59 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.015 --rc genhtml_branch_coverage=1 00:05:30.015 --rc genhtml_function_coverage=1 00:05:30.015 --rc genhtml_legend=1 00:05:30.015 --rc geninfo_all_blocks=1 00:05:30.015 --rc geninfo_unexecuted_blocks=1 00:05:30.015 00:05:30.015 ' 00:05:30.015 18:54:59 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.015 --rc genhtml_branch_coverage=1 00:05:30.015 --rc genhtml_function_coverage=1 00:05:30.015 --rc genhtml_legend=1 00:05:30.015 --rc geninfo_all_blocks=1 00:05:30.015 --rc geninfo_unexecuted_blocks=1 00:05:30.015 00:05:30.015 ' 00:05:30.015 18:54:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:30.015 18:54:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.015 18:54:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.015 18:54:59 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:30.015 18:54:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.015 18:54:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.015 ************************************ 00:05:30.015 START TEST event_perf 00:05:30.015 ************************************ 00:05:30.015 18:54:59 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:30.015 Running I/O for 1 seconds...[2024-11-05 18:54:59.196926] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:30.015 [2024-11-05 18:54:59.197021] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117572 ] 00:05:30.015 [2024-11-05 18:54:59.272889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.015 [2024-11-05 18:54:59.313285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.015 [2024-11-05 18:54:59.313387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.015 [2024-11-05 18:54:59.313542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.015 Running I/O for 1 seconds...[2024-11-05 18:54:59.313542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.400 00:05:31.400 lcore 0: 176221 00:05:31.400 lcore 1: 176220 00:05:31.400 lcore 2: 176217 00:05:31.400 lcore 3: 176220 00:05:31.400 done. 00:05:31.400 00:05:31.400 real 0m1.173s 00:05:31.400 user 0m4.105s 00:05:31.400 sys 0m0.063s 00:05:31.400 18:55:00 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.400 18:55:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.400 ************************************ 00:05:31.400 END TEST event_perf 00:05:31.400 ************************************ 00:05:31.400 18:55:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.400 18:55:00 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:31.400 18:55:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.400 18:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.400 ************************************ 00:05:31.400 START TEST event_reactor 00:05:31.400 ************************************ 00:05:31.400 18:55:00 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:31.400 [2024-11-05 18:55:00.449320] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:05:31.400 [2024-11-05 18:55:00.449413] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117926 ] 00:05:31.400 [2024-11-05 18:55:00.524947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.400 [2024-11-05 18:55:00.562048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.344 test_start 00:05:32.344 oneshot 00:05:32.344 tick 100 00:05:32.344 tick 100 00:05:32.344 tick 250 00:05:32.344 tick 100 00:05:32.344 tick 100 00:05:32.344 tick 250 00:05:32.344 tick 100 00:05:32.344 tick 500 00:05:32.344 tick 100 00:05:32.344 tick 100 00:05:32.344 tick 250 00:05:32.344 tick 100 00:05:32.344 tick 100 00:05:32.344 test_end 00:05:32.344 00:05:32.344 real 0m1.166s 00:05:32.344 user 0m1.101s 00:05:32.344 sys 0m0.061s 00:05:32.344 18:55:01 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.344 18:55:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.344 ************************************ 00:05:32.344 END TEST event_reactor 00:05:32.344 ************************************ 00:05:32.344 18:55:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.344 18:55:01 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:32.344 18:55:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.344 18:55:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.605 ************************************ 00:05:32.605 START TEST event_reactor_perf 00:05:32.605 ************************************ 00:05:32.606 18:55:01 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.606 [2024-11-05 18:55:01.695476] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:05:32.606 [2024-11-05 18:55:01.695575] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118139 ] 00:05:32.606 [2024-11-05 18:55:01.770868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.606 [2024-11-05 18:55:01.808390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.549 test_start 00:05:33.549 test_end 00:05:33.549 Performance: 371416 events per second 00:05:33.549 00:05:33.549 real 0m1.166s 00:05:33.549 user 0m1.098s 00:05:33.549 sys 0m0.065s 00:05:33.549 18:55:02 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.549 18:55:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.549 ************************************ 00:05:33.549 END TEST event_reactor_perf 00:05:33.549 ************************************ 00:05:33.811 18:55:02 event -- event/event.sh@49 -- # uname -s 00:05:33.811 18:55:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.811 18:55:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.811 18:55:02 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.811 18:55:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.811 18:55:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.811 ************************************ 00:05:33.811 START TEST event_scheduler 00:05:33.811 ************************************ 00:05:33.811 18:55:02 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.811 * Looking for test storage... 
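The three event-framework microbenchmarks above share one invocation pattern; sketched here with the flags taken from the trace (the one-line descriptions are inferred from each binary's output, not stated by the log):

# event_perf: dispatch events across all four reactors (core mask 0xF) for 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1

# reactor: single-reactor oneshot/tick exerciser, 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1

# reactor_perf: raw event throughput on one reactor, 1 second
# (this run reported 371416 events per second)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1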
00:05:33.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.811 18:55:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.811 --rc genhtml_branch_coverage=1 00:05:33.811 --rc genhtml_function_coverage=1 00:05:33.811 --rc genhtml_legend=1 00:05:33.811 --rc geninfo_all_blocks=1 00:05:33.811 --rc geninfo_unexecuted_blocks=1 00:05:33.811 00:05:33.811 ' 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.811 --rc genhtml_branch_coverage=1 00:05:33.811 --rc genhtml_function_coverage=1 00:05:33.811 --rc genhtml_legend=1 00:05:33.811 --rc geninfo_all_blocks=1 00:05:33.811 --rc geninfo_unexecuted_blocks=1 00:05:33.811 00:05:33.811 ' 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.811 --rc genhtml_branch_coverage=1 00:05:33.811 --rc genhtml_function_coverage=1 00:05:33.811 --rc genhtml_legend=1 00:05:33.811 --rc geninfo_all_blocks=1 00:05:33.811 --rc geninfo_unexecuted_blocks=1 00:05:33.811 00:05:33.811 ' 00:05:33.811 18:55:03 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.811 --rc genhtml_branch_coverage=1 00:05:33.811 --rc genhtml_function_coverage=1 00:05:33.811 --rc genhtml_legend=1 00:05:33.811 --rc geninfo_all_blocks=1 00:05:33.811 --rc geninfo_unexecuted_blocks=1 00:05:33.811 00:05:33.811 ' 00:05:33.811 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.811 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=118389 00:05:33.811 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.812 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 118389 00:05:33.812 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 118389 ']' 00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.812 18:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.073 [2024-11-05 18:55:03.175986] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:34.073 [2024-11-05 18:55:03.176062] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118389 ] 00:05:34.073 [2024-11-05 18:55:03.237535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.073 [2024-11-05 18:55:03.273632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.073 [2024-11-05 18:55:03.273855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.073 [2024-11-05 18:55:03.274097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.073 [2024-11-05 18:55:03.274097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:34.073 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.073 [2024-11-05 18:55:03.314520] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:34.073 [2024-11-05 18:55:03.314534] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.073 [2024-11-05 18:55:03.314541] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.073 [2024-11-05 18:55:03.314545] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.073 [2024-11-05 18:55:03.314549] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.073 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.073 [2024-11-05 18:55:03.370403] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
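The scheduler_create_thread subtest below drives the test application entirely over RPC. The call sequence, condensed from the trace (flags are verbatim; the comments on -m/-a, and the thread IDs 11 and 12, reflect this particular run):

# pick the dynamic scheduler while the app sits in --wait-for-rpc,
# then let framework initialization complete
rpc_cmd framework_set_scheduler dynamic
rpc_cmd framework_start_init

# create pinned threads: -m is a core mask, -a the thread's claimed busy percentage
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# retune the thread created as id 11, then delete the one created as id 12
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12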
00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.073 18:55:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.073 18:55:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 ************************************ 00:05:34.335 START TEST scheduler_create_thread 00:05:34.335 ************************************ 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 2 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 3 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 4 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 5 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 6 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 7 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 8 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.335 9 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.335 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.907 10 00:05:34.907 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.907 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.907 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.907 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.291 18:55:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.291 18:55:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.291 18:55:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.291 18:55:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.291 18:55:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.862 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.862 18:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.862 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.862 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.805 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.805 18:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.805 18:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.805 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.805 18:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.376 18:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.376 00:05:38.376 real 0m4.225s 00:05:38.376 user 0m0.024s 00:05:38.376 sys 0m0.008s 00:05:38.376 18:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.376 18:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.376 ************************************ 00:05:38.376 END TEST scheduler_create_thread 00:05:38.376 ************************************ 00:05:38.376 18:55:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.376 18:55:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 118389 00:05:38.376 18:55:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 118389 ']' 00:05:38.376 18:55:07 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 118389 00:05:38.376 18:55:07 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:38.376 18:55:07 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.376 18:55:07 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 118389 00:05:38.637 18:55:07 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:38.637 18:55:07 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:38.637 18:55:07 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 118389' 00:05:38.637 killing process with pid 118389 00:05:38.637 18:55:07 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 118389 00:05:38.637 18:55:07 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 118389 00:05:38.637 [2024-11-05 18:55:07.915587] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
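The killprocess teardown used here (and after every suite in this log) follows the same defensive sequence each time; a sketch of just the path this run exercised, noting that the real helper in autotest_common.sh has further branches (e.g. for processes running under sudo):

pid=118389                                   # value from this run
[ -n "$pid" ] && kill -0 "$pid"              # pid supplied and still alive?
if [ "$(uname)" = Linux ]; then
    name=$(ps --no-headers -o comm= "$pid")  # reactor_2 in this run
fi
if [ "$name" != sudo ]; then                 # not a sudo wrapper: terminate directly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
fi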
00:05:38.899 00:05:38.899 real 0m5.146s 00:05:38.899 user 0m10.201s 00:05:38.899 sys 0m0.374s 00:05:38.899 18:55:08 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.899 18:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.899 ************************************ 00:05:38.899 END TEST event_scheduler 00:05:38.899 ************************************ 00:05:38.899 18:55:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.899 18:55:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.899 18:55:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.899 18:55:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.899 18:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.899 ************************************ 00:05:38.899 START TEST app_repeat 00:05:38.899 ************************************ 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=119435 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 119435' 00:05:38.899 Process app_repeat pid: 119435 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.899 spdk_app_start Round 0 00:05:38.899 18:55:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119435 /var/tmp/spdk-nbd.sock 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 119435 ']' 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.899 18:55:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.899 [2024-11-05 18:55:08.197238] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
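With the scheduler suite done, event.sh launches app_repeat with a two-core mask (-m 0x3) and a 4-second repeat timer (-t 4), then blocks until the UNIX-domain RPC socket answers. Roughly, as reconstructed from the trace (waitforlisten and killprocess are the autotest_common.sh helpers seen above; paths are shortened to $testdir where the trace uses the absolute workspace path):

    rpc_server=/var/tmp/spdk-nbd.sock
    "$testdir"/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    waitforlisten "$repeat_pid" "$rpc_server"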
00:05:38.899 [2024-11-05 18:55:08.197305] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119435 ] 00:05:39.160 [2024-11-05 18:55:08.271930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.160 [2024-11-05 18:55:08.314388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.160 [2024-11-05 18:55:08.314390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.160 18:55:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.160 18:55:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:39.160 18:55:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.421 Malloc0 00:05:39.421 18:55:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.421 Malloc1 00:05:39.682 18:55:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.682 /dev/nbd0 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.682 1+0 records in 00:05:39.682 1+0 records out 00:05:39.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208108 s, 19.7 MB/s 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.682 18:55:08 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.682 18:55:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.943 /dev/nbd1 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.943 1+0 records in 00:05:39.943 1+0 records out 00:05:39.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276657 s, 14.8 MB/s 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.943 18:55:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.943 
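Both nbd devices are attached at this point. The waitfornbd helper traced above polls /proc/partitions until the kernel exposes the device, then reads a single 4 KiB block with O_DIRECT to prove the SPDK-backed device actually serves data. A condensed sketch; the retry pacing is an assumption and the real helper may differ in detail:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed delay between polls
        done
        # One direct read; an empty result means the device is not serving I/O.
        dd if=/dev/"$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]
    }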
18:55:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.943 18:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.204 { 00:05:40.204 "nbd_device": "/dev/nbd0", 00:05:40.204 "bdev_name": "Malloc0" 00:05:40.204 }, 00:05:40.204 { 00:05:40.204 "nbd_device": "/dev/nbd1", 00:05:40.204 "bdev_name": "Malloc1" 00:05:40.204 } 00:05:40.204 ]' 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.204 { 00:05:40.204 "nbd_device": "/dev/nbd0", 00:05:40.204 "bdev_name": "Malloc0" 00:05:40.204 }, 00:05:40.204 { 00:05:40.204 "nbd_device": "/dev/nbd1", 00:05:40.204 "bdev_name": "Malloc1" 00:05:40.204 } 00:05:40.204 ]' 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.204 /dev/nbd1' 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.204 /dev/nbd1' 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.204 18:55:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.205 256+0 records in 00:05:40.205 256+0 records out 00:05:40.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120731 s, 86.9 MB/s 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.205 256+0 records in 00:05:40.205 256+0 records out 00:05:40.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233742 s, 44.9 MB/s 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.205 256+0 records in 00:05:40.205 256+0 records out 00:05:40.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177153 s, 59.2 MB/s 00:05:40.205 18:55:09 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.205 18:55:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.485 18:55:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.745 18:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.006 18:55:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.006 18:55:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.006 18:55:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.267 [2024-11-05 18:55:10.424421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.267 [2024-11-05 18:55:10.461396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.267 [2024-11-05 18:55:10.461399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.267 [2024-11-05 18:55:10.493201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.267 [2024-11-05 18:55:10.493239] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.570 spdk_app_start Round 1 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119435 /var/tmp/spdk-nbd.sock 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 119435 ']' 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
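Round 0's data pass, traced through nbd_dd_data_verify above, is symmetric: one 1 MiB random pattern is written through each nbd device with O_DIRECT, then each device is compared byte-for-byte against the same source file. In outline, using the paths from the trace:

    tmp_file=$testdir/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB pattern
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$dev"    # fails the test on any mismatch
    done
    rm "$tmp_file"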
00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.570 18:55:13 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.570 Malloc0 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.570 Malloc1 00:05:44.570 18:55:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.570 18:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.831 /dev/nbd0 00:05:44.831 18:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.831 18:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:44.831 1+0 records in 00:05:44.831 1+0 records out 00:05:44.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024322 s, 16.8 MB/s 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:44.831 18:55:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:44.831 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.831 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.831 18:55:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.092 /dev/nbd1 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.092 1+0 records in 00:05:45.092 1+0 records out 00:05:45.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027465 s, 14.9 MB/s 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:45.092 18:55:14 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.092 18:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:45.354 { 00:05:45.354 "nbd_device": "/dev/nbd0", 00:05:45.354 "bdev_name": "Malloc0" 00:05:45.354 }, 00:05:45.354 { 00:05:45.354 "nbd_device": "/dev/nbd1", 00:05:45.354 "bdev_name": "Malloc1" 00:05:45.354 } 00:05:45.354 ]' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.354 { 00:05:45.354 "nbd_device": "/dev/nbd0", 00:05:45.354 "bdev_name": "Malloc0" 00:05:45.354 }, 00:05:45.354 { 00:05:45.354 "nbd_device": "/dev/nbd1", 00:05:45.354 "bdev_name": "Malloc1" 00:05:45.354 } 00:05:45.354 ]' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.354 /dev/nbd1' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.354 /dev/nbd1' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.354 256+0 records in 00:05:45.354 256+0 records out 00:05:45.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125346 s, 83.7 MB/s 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.354 256+0 records in 00:05:45.354 256+0 records out 00:05:45.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169567 s, 61.8 MB/s 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.354 256+0 records in 00:05:45.354 256+0 records out 00:05:45.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178683 s, 58.7 MB/s 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.354 18:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.615 18:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.875 18:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.875 18:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.136 18:55:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.136 18:55:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.136 18:55:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.397 [2024-11-05 18:55:15.493387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.397 [2024-11-05 18:55:15.531021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.397 [2024-11-05 18:55:15.531023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.397 [2024-11-05 18:55:15.563320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.397 [2024-11-05 18:55:15.563356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.696 spdk_app_start Round 2 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 119435 /var/tmp/spdk-nbd.sock 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 119435 ']' 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
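Each 'spdk_app_start Round N' banner marks one pass of event.sh's outer loop: wait for the listener, rebuild the malloc bdevs and nbd devices, verify, tear down, then ask the app instance to die so the next round starts clean. Approximately, following the event.sh line numbers in the trace:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # ... bdev_malloc_create x2, nbd_start_disk x2, write/verify, nbd_stop_disk x2 ...
        scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3    # give the app time to cycle before the next waitforlisten
    done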
00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:49.696 18:55:18 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.696 Malloc0 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.696 Malloc1 00:05:49.696 18:55:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.696 18:55:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.957 /dev/nbd0 00:05:49.957 18:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.957 18:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:49.957 1+0 records in 00:05:49.957 1+0 records out 00:05:49.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231518 s, 17.7 MB/s 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:49.957 18:55:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:49.957 18:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.957 18:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.957 18:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.218 /dev/nbd1 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.218 1+0 records in 00:05:50.218 1+0 records out 00:05:50.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379825 s, 10.8 MB/s 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:50.218 18:55:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:50.218 { 00:05:50.218 "nbd_device": "/dev/nbd0", 00:05:50.218 "bdev_name": "Malloc0" 00:05:50.218 }, 00:05:50.218 { 00:05:50.218 "nbd_device": "/dev/nbd1", 00:05:50.218 "bdev_name": "Malloc1" 00:05:50.218 } 00:05:50.218 ]' 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.218 { 00:05:50.218 "nbd_device": "/dev/nbd0", 00:05:50.218 "bdev_name": "Malloc0" 00:05:50.218 }, 00:05:50.218 { 00:05:50.218 "nbd_device": "/dev/nbd1", 00:05:50.218 "bdev_name": "Malloc1" 00:05:50.218 } 00:05:50.218 ]' 00:05:50.218 18:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.480 /dev/nbd1' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.480 /dev/nbd1' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.480 256+0 records in 00:05:50.480 256+0 records out 00:05:50.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117985 s, 88.9 MB/s 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.480 256+0 records in 00:05:50.480 256+0 records out 00:05:50.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177408 s, 59.1 MB/s 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.480 256+0 records in 00:05:50.480 256+0 records out 00:05:50.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191698 s, 54.7 MB/s 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.480 18:55:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.740 18:55:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.740 18:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.001 18:55:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.001 18:55:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.261 18:55:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.261 [2024-11-05 18:55:20.585210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.522 [2024-11-05 18:55:20.622453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.522 [2024-11-05 18:55:20.622455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.522 [2024-11-05 18:55:20.654048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.522 [2024-11-05 18:55:20.654083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.819 18:55:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 119435 /var/tmp/spdk-nbd.sock 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 119435 ']' 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
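The zero-disk check that closes each round re-queries nbd_get_disks and counts /dev/nbd names in the JSON; after nbd_stop_disk the array must be empty. A sketch of the same check, where '|| true' mirrors the trace tolerating grep -c's non-zero exit when nothing matches:

    disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]    # any leftover /dev/nbd* fails the round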
00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:54.819 18:55:23 event.app_repeat -- event/event.sh@39 -- # killprocess 119435 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 119435 ']' 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 119435 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 119435 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 119435' 00:05:54.819 killing process with pid 119435 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@971 -- # kill 119435 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@976 -- # wait 119435 00:05:54.819 spdk_app_start is called in Round 0. 00:05:54.819 Shutdown signal received, stop current app iteration 00:05:54.819 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:05:54.819 spdk_app_start is called in Round 1. 00:05:54.819 Shutdown signal received, stop current app iteration 00:05:54.819 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:05:54.819 spdk_app_start is called in Round 2. 00:05:54.819 Shutdown signal received, stop current app iteration 00:05:54.819 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:05:54.819 spdk_app_start is called in Round 3. 
00:05:54.819 Shutdown signal received, stop current app iteration 00:05:54.819 18:55:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.819 18:55:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.819 00:05:54.819 real 0m15.650s 00:05:54.819 user 0m34.211s 00:05:54.819 sys 0m2.217s 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.819 18:55:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.819 ************************************ 00:05:54.819 END TEST app_repeat 00:05:54.819 ************************************ 00:05:54.819 18:55:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.819 18:55:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.819 18:55:23 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.819 18:55:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.819 18:55:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.819 ************************************ 00:05:54.819 START TEST cpu_locks 00:05:54.819 ************************************ 00:05:54.819 18:55:23 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.819 * Looking for test storage... 00:05:54.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.819 18:55:23 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.819 18:55:23 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.819 18:55:23 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.819 18:55:24 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.819 18:55:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.820 18:55:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.820 --rc genhtml_branch_coverage=1 00:05:54.820 --rc genhtml_function_coverage=1 00:05:54.820 --rc genhtml_legend=1 00:05:54.820 --rc geninfo_all_blocks=1 00:05:54.820 --rc geninfo_unexecuted_blocks=1 00:05:54.820 00:05:54.820 ' 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.820 --rc genhtml_branch_coverage=1 00:05:54.820 --rc genhtml_function_coverage=1 00:05:54.820 --rc genhtml_legend=1 00:05:54.820 --rc geninfo_all_blocks=1 00:05:54.820 --rc geninfo_unexecuted_blocks=1 00:05:54.820 00:05:54.820 ' 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.820 --rc genhtml_branch_coverage=1 00:05:54.820 --rc genhtml_function_coverage=1 00:05:54.820 --rc genhtml_legend=1 00:05:54.820 --rc geninfo_all_blocks=1 00:05:54.820 --rc geninfo_unexecuted_blocks=1 00:05:54.820 00:05:54.820 ' 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.820 --rc genhtml_branch_coverage=1 00:05:54.820 --rc genhtml_function_coverage=1 00:05:54.820 --rc genhtml_legend=1 00:05:54.820 --rc geninfo_all_blocks=1 00:05:54.820 --rc geninfo_unexecuted_blocks=1 00:05:54.820 00:05:54.820 ' 00:05:54.820 18:55:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.820 18:55:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.820 18:55:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.820 18:55:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.820 18:55:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.820 ************************************ 
00:05:54.820 START TEST default_locks 00:05:54.820 ************************************ 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=122993 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 122993 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 122993 ']' 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.820 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.080 [2024-11-05 18:55:24.170443] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:55.080 [2024-11-05 18:55:24.170493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122993 ] 00:05:55.080 [2024-11-05 18:55:24.240201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.080 [2024-11-05 18:55:24.276430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.650 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.650 18:55:24 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:55.650 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 122993 00:05:55.650 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 122993 00:05:55.650 18:55:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.911 lslocks: write error 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 122993 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 122993 ']' 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 122993 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 122993 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 122993' 
00:05:55.911 killing process with pid 122993 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 122993 00:05:55.911 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 122993 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 122993 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 122993 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 122993 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 122993 ']' 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
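The lock probe used throughout this suite is the locks_exist call traced above (cpu_locks.sh@22): a claimed core shows up as a POSIX file lock named spdk_cpu_lock_* held by the target pid. The "lslocks: write error" line is expected noise, since grep -q exits at the first match and lslocks then writes into a closed pipe:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # succeeds while the pid holds a core lock
  }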
00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (122993) - No such process 00:05:56.172 ERROR: process (pid: 122993) is no longer running 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.172 00:05:56.172 real 0m1.251s 00:05:56.172 user 0m1.363s 00:05:56.172 sys 0m0.387s 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.172 18:55:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.172 ************************************ 00:05:56.172 END TEST default_locks 00:05:56.172 ************************************ 00:05:56.172 18:55:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.172 18:55:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.172 18:55:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.172 18:55:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.172 ************************************ 00:05:56.172 START TEST default_locks_via_rpc 00:05:56.172 ************************************ 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=123263 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 123263 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 123263 ']' 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
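The "kill: (122993) - No such process" and "ERROR: process (pid: 122993) is no longer running" lines above are the point of the NOT wrapper, not a failure: default_locks kills the target and then asserts that waitforlisten on the dead pid fails. Stripped to its core (the traced helper at autotest_common.sh@650-677 additionally normalizes signal exits via the es > 128 branch):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # invert: the wrapped command failing is the pass condition
  }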
00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.172 18:55:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.432 [2024-11-05 18:55:25.499864] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:56.432 [2024-11-05 18:55:25.499917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123263 ] 00:05:56.432 [2024-11-05 18:55:25.570537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.432 [2024-11-05 18:55:25.608006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 123263 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 123263 00:05:57.003 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 123263 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 123263 ']' 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 123263 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 123263 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.573 18:55:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 123263' 00:05:57.573 killing process with pid 123263 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 123263 00:05:57.573 18:55:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 123263 00:05:57.833 00:05:57.833 real 0m1.591s 00:05:57.833 user 0m1.720s 00:05:57.833 sys 0m0.511s 00:05:57.833 18:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.833 18:55:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.833 ************************************ 00:05:57.833 END TEST default_locks_via_rpc 00:05:57.833 ************************************ 00:05:57.833 18:55:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.833 18:55:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.833 18:55:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.833 18:55:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.833 ************************************ 00:05:57.833 START TEST non_locking_app_on_locked_coremask 00:05:57.833 ************************************ 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=123623 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 123623 /var/tmp/spdk.sock 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 123623 ']' 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.833 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.094 [2024-11-05 18:55:27.174461] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
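killprocess, traced twice so far (autotest_common.sh@952-976), guards the kill with a liveness and identity check. A condensed version; the sudo branch is simplified here, as the real helper treats a sudo-wrapped target specially rather than refusing outright:

  killprocess() {
    local pid=$1 process_name=
    kill -0 "$pid"                                     # fail fast if it already exited
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in these runs
    fi
    [ "$process_name" != sudo ] || return 1            # assumption: never kill the sudo parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                # reap; a SIGTERM death reports nonzero
  }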
00:05:58.094 [2024-11-05 18:55:27.174552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123623 ] 00:05:58.094 [2024-11-05 18:55:27.252895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.094 [2024-11-05 18:55:27.294081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=123743 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 123743 /var/tmp/spdk2.sock 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 123743 ']' 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.664 18:55:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.924 [2024-11-05 18:55:28.008041] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:58.924 [2024-11-05 18:55:28.008095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123743 ] 00:05:58.924 [2024-11-05 18:55:28.121189] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
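The "CPU core locks deactivated." notice marks the mechanism this whole file exercises: a target launched with --disable-cpumask-locks skips claiming the /var/tmp/spdk_cpu_lock_* files, and the claim can also be toggled later over the RPC socket, as default_locks_via_rpc did above. In sketch form, with spdk_tgt and rpc.py abbreviating the full paths shown in the log:

  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts without claiming core 0
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks       # claim the core locks now
  rpc.py -s /var/tmp/spdk2.sock framework_disable_cpumask_locks      # and release them again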
00:05:58.924 [2024-11-05 18:55:28.121218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.924 [2024-11-05 18:55:28.193370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.495 18:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.495 18:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:59.495 18:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 123623 00:05:59.495 18:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 123623 00:05:59.495 18:55:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.755 lslocks: write error 00:05:59.755 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 123623 00:05:59.755 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 123623 ']' 00:05:59.755 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 123623 00:05:59.755 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:59.755 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 123623 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 123623' 00:06:00.016 killing process with pid 123623 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 123623 00:06:00.016 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 123623 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 123743 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 123743 ']' 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 123743 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.276 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 123743 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 123743' 00:06:00.535 killing 
process with pid 123743 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 123743 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 123743 00:06:00.535 00:06:00.535 real 0m2.702s 00:06:00.535 user 0m3.023s 00:06:00.535 sys 0m0.790s 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.535 18:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.535 ************************************ 00:06:00.535 END TEST non_locking_app_on_locked_coremask 00:06:00.535 ************************************ 00:06:00.536 18:55:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.536 18:55:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.536 18:55:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.536 18:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.796 ************************************ 00:06:00.796 START TEST locking_app_on_unlocked_coremask 00:06:00.796 ************************************ 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=124118 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 124118 /var/tmp/spdk.sock 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 124118 ']' 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.796 18:55:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.796 [2024-11-05 18:55:29.948641] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:00.796 [2024-11-05 18:55:29.948693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124118 ] 00:06:00.796 [2024-11-05 18:55:30.024267] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
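The test that just ended and the one starting here mirror each other: above, a locked target owned core 0 and an unlocked second instance could still join it; below, the first target opts out, leaving a second, locking target free to claim the same core. The second half in sketch form (spdk_tgt abbreviates the build/bin path; pids from this run shown for orientation):

  spdk_tgt -m 0x1 --disable-cpumask-locks & pid1=$!   # 124118: takes no lock
  waitforlisten "$pid1"
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!    # 124448: free to claim core 0
  waitforlisten "$pid2" /var/tmp/spdk2.sock
  locks_exist "$pid2"                                 # the lock now belongs to the second instance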
00:06:00.796 [2024-11-05 18:55:30.024302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.796 [2024-11-05 18:55:30.063944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=124448 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 124448 /var/tmp/spdk2.sock 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 124448 ']' 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.739 18:55:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.739 [2024-11-05 18:55:30.805761] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
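waitforlisten appears before every assertion in this file; its traced interface is a pid, an optional rpc_addr (default /var/tmp/spdk.sock) and max_retries=100. A minimal stand-in with that interface, assuming a simple poll loop; the real helper in autotest_common.sh does more than check for the socket file:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
      [ -S "$rpc_addr" ] && return 0           # assumption: socket presence means listening
      sleep 0.1
    done
    return 1
  }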
00:06:01.739 [2024-11-05 18:55:30.805821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124448 ] 00:06:01.739 [2024-11-05 18:55:30.917996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.739 [2024-11-05 18:55:30.990291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.311 18:55:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.311 18:55:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:02.311 18:55:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 124448 00:06:02.311 18:55:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124448 00:06:02.311 18:55:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.882 lslocks: write error 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 124118 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 124118 ']' 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 124118 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 124118 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 124118' 00:06:02.882 killing process with pid 124118 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 124118 00:06:02.882 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 124118 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 124448 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 124448 ']' 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 124448 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 124448 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.453 18:55:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 124448' 00:06:03.453 killing process with pid 124448 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 124448 00:06:03.453 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 124448 00:06:03.714 00:06:03.714 real 0m2.982s 00:06:03.714 user 0m3.309s 00:06:03.714 sys 0m0.910s 00:06:03.714 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.714 18:55:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.714 ************************************ 00:06:03.714 END TEST locking_app_on_unlocked_coremask 00:06:03.714 ************************************ 00:06:03.714 18:55:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.714 18:55:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.714 18:55:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.714 18:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.714 ************************************ 00:06:03.714 START TEST locking_app_on_locked_coremask 00:06:03.714 ************************************ 00:06:03.714 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:03.714 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=124825 00:06:03.714 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 124825 /var/tmp/spdk.sock 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 124825 ']' 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.715 18:55:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.715 [2024-11-05 18:55:33.005961] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:03.715 [2024-11-05 18:55:33.006009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124825 ] 00:06:03.976 [2024-11-05 18:55:33.077188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.976 [2024-11-05 18:55:33.112299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125071 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125071 /var/tmp/spdk2.sock 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 125071 /var/tmp/spdk2.sock 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 125071 /var/tmp/spdk2.sock 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 125071 ']' 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.548 18:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.548 [2024-11-05 18:55:33.853602] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:04.548 [2024-11-05 18:55:33.853657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125071 ] 00:06:04.809 [2024-11-05 18:55:33.967175] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 124825 has claimed it. 00:06:04.809 [2024-11-05 18:55:33.967223] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (125071) - No such process 00:06:05.381 ERROR: process (pid: 125071) is no longer running 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 124825 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 124825 00:06:05.381 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.953 lslocks: write error 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 124825 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 124825 ']' 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 124825 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.954 18:55:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 124825 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 124825' 00:06:05.954 killing process with pid 124825 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 124825 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 124825 00:06:05.954 00:06:05.954 real 0m2.306s 00:06:05.954 user 0m2.623s 00:06:05.954 sys 0m0.625s 00:06:05.954 18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.954 
18:55:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.954 ************************************ 00:06:05.954 END TEST locking_app_on_locked_coremask 00:06:05.954 ************************************ 00:06:06.214 18:55:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.214 18:55:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.214 18:55:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.214 18:55:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.214 ************************************ 00:06:06.214 START TEST locking_overlapped_coremask 00:06:06.214 ************************************ 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125371 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125371 /var/tmp/spdk.sock 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 125371 ']' 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:06.214 18:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.214 [2024-11-05 18:55:35.389565] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
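The failure traced above is the pass condition of locking_app_on_locked_coremask: with locks left on, a second target pointed at an already-claimed core must die before it ever listens. Reconstructed from the trace (cpu_locks.sh@114-120, spdk_tgt abbreviating the full path):

  spdk_tgt -m 0x1 & pid1=$!                          # 124825: claims core 0
  waitforlisten "$pid1"
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!   # 125071: tries the same core, locks on
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock      # it exits with the claim_cpu_cores error instead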
00:06:06.214 [2024-11-05 18:55:35.389621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125371 ] 00:06:06.214 [2024-11-05 18:55:35.466019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.214 [2024-11-05 18:55:35.508611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.214 [2024-11-05 18:55:35.508730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.214 [2024-11-05 18:55:35.508733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=125537 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 125537 /var/tmp/spdk2.sock 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 125537 /var/tmp/spdk2.sock 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 125537 /var/tmp/spdk2.sock 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 125537 ']' 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.154 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.154 [2024-11-05 18:55:36.239468] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
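The masks in play are plain hex core bitmaps: -m 0x1 is core 0, -m 0x7 = 0b111 is cores 0-2 (hence "Total cores available: 3" and reactors on cores 0, 1 and 2 above), and -m 0x1c = 0b11100 is cores 2-4, so the two overlapped targets collide only on core 2. One line to decode a mask:

  printf 'cores:'; for b in {0..4}; do (( (0x1c >> b) & 1 )) && printf ' %d' "$b"; done; echo   # cores: 2 3 4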
00:06:07.154 [2024-11-05 18:55:36.239523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125537 ] 00:06:07.154 [2024-11-05 18:55:36.327653] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125371 has claimed it. 00:06:07.154 [2024-11-05 18:55:36.327689] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (125537) - No such process 00:06:07.725 ERROR: process (pid: 125537) is no longer running 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125371 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 125371 ']' 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 125371 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 125371 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 125371' 00:06:07.725 killing process with pid 125371 00:06:07.725 18:55:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 125371 00:06:07.725 18:55:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 125371 00:06:07.987 00:06:07.987 real 0m1.804s 00:06:07.987 user 0m5.209s 00:06:07.987 sys 0m0.377s 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.987 ************************************ 00:06:07.987 END TEST locking_overlapped_coremask 00:06:07.987 ************************************ 00:06:07.987 18:55:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.987 18:55:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.987 18:55:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.987 18:55:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.987 ************************************ 00:06:07.987 START TEST locking_overlapped_coremask_via_rpc 00:06:07.987 ************************************ 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=125841 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 125841 /var/tmp/spdk.sock 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 125841 ']' 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.987 18:55:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.987 [2024-11-05 18:55:37.265775] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:07.987 [2024-11-05 18:55:37.265828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125841 ] 00:06:08.248 [2024-11-05 18:55:37.340034] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
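check_remaining_locks, traced at the end of the previous test (cpu_locks.sh@36-38), is a plain glob-against-expansion comparison: with cores 0-2 claimed, exactly the three matching lock files may exist.

  check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]   # any extra or missing file fails the test
  }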
00:06:08.248 [2024-11-05 18:55:37.340076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.248 [2024-11-05 18:55:37.381071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.248 [2024-11-05 18:55:37.381187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.248 [2024-11-05 18:55:37.381190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=125914 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 125914 /var/tmp/spdk2.sock 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 125914 ']' 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.820 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.820 [2024-11-05 18:55:38.131590] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:08.820 [2024-11-05 18:55:38.131647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125914 ] 00:06:09.081 [2024-11-05 18:55:38.217970] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.081 [2024-11-05 18:55:38.217995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.081 [2024-11-05 18:55:38.281204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.081 [2024-11-05 18:55:38.284868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.081 [2024-11-05 18:55:38.284869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.654 [2024-11-05 18:55:38.915809] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125841 has claimed it. 
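Note: the failure above is the intended collision: the first target runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the two masks overlap on exactly one core, which pid 125841 locked first. Shell arithmetic confirms it:
  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # overlap: 0x4, i.e. bit 2 = core 2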
00:06:09.654 request: 00:06:09.654 { 00:06:09.654 "method": "framework_enable_cpumask_locks", 00:06:09.654 "req_id": 1 00:06:09.654 } 00:06:09.654 Got JSON-RPC error response 00:06:09.654 response: 00:06:09.654 { 00:06:09.654 "code": -32603, 00:06:09.654 "message": "Failed to claim CPU core: 2" 00:06:09.654 } 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 125841 /var/tmp/spdk.sock 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 125841 ']' 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.654 18:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 125914 /var/tmp/spdk2.sock 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 125914 ']' 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
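Note: framework_enable_cpumask_locks is the JSON-RPC method that makes a target started with --disable-cpumask-locks claim its cores after the fact; the -32603 "Failed to claim CPU core: 2" reply above is the expected outcome on the second target. Driving the same sequence by hand with the repo's rpc.py would look roughly like this (socket paths as in this run):
  scripts/rpc.py framework_enable_cpumask_locks                        # first target (/var/tmp/spdk.sock): succeeds, locks cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks # second target: fails, core 2 already locked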
00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.915 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.176 00:06:10.176 real 0m2.083s 00:06:10.176 user 0m0.842s 00:06:10.176 sys 0m0.162s 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.176 18:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.176 ************************************ 00:06:10.176 END TEST locking_overlapped_coremask_via_rpc 00:06:10.176 ************************************ 00:06:10.176 18:55:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.176 18:55:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125841 ]] 00:06:10.176 18:55:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125841 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 125841 ']' 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 125841 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 125841 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 125841' 00:06:10.176 killing process with pid 125841 00:06:10.176 18:55:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 125841 00:06:10.177 18:55:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 125841 00:06:10.449 18:55:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125914 ]] 00:06:10.449 18:55:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125914 00:06:10.449 18:55:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 125914 ']' 00:06:10.449 18:55:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 125914 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 125914 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 125914' 00:06:10.450 killing process with pid 125914 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 125914 00:06:10.450 18:55:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 125914 00:06:10.717 18:55:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.717 18:55:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.717 18:55:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125841 ]] 00:06:10.717 18:55:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125841 00:06:10.717 18:55:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 125841 ']' 00:06:10.717 18:55:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 125841 00:06:10.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (125841) - No such process 00:06:10.717 18:55:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 125841 is not found' 00:06:10.717 Process with pid 125841 is not found 00:06:10.718 18:55:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125914 ]] 00:06:10.718 18:55:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125914 00:06:10.718 18:55:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 125914 ']' 00:06:10.718 18:55:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 125914 00:06:10.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (125914) - No such process 00:06:10.718 18:55:39 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 125914 is not found' 00:06:10.718 Process with pid 125914 is not found 00:06:10.718 18:55:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.718 00:06:10.718 real 0m15.977s 00:06:10.718 user 0m28.273s 00:06:10.718 sys 0m4.661s 00:06:10.718 18:55:39 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.718 18:55:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.718 ************************************ 00:06:10.718 END TEST cpu_locks 00:06:10.718 ************************************ 00:06:10.718 00:06:10.718 real 0m40.968s 00:06:10.718 user 1m19.284s 00:06:10.718 sys 0m7.867s 00:06:10.718 18:55:39 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.718 18:55:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.718 ************************************ 00:06:10.718 END TEST event 00:06:10.718 ************************************ 00:06:10.718 18:55:39 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.718 18:55:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.718 18:55:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.718 18:55:39 -- common/autotest_common.sh@10 -- # set +x 00:06:10.718 ************************************ 00:06:10.718 START TEST thread 00:06:10.718 ************************************ 00:06:10.718 18:55:39 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.981 * Looking for test storage... 00:06:10.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.981 18:55:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.981 18:55:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.981 18:55:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.981 18:55:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.981 18:55:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.981 18:55:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.981 18:55:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.981 18:55:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.981 18:55:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.981 18:55:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.981 18:55:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.981 18:55:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:10.981 18:55:40 thread -- scripts/common.sh@345 -- # : 1 00:06:10.981 18:55:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.981 18:55:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.981 18:55:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:10.981 18:55:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:10.981 18:55:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.981 18:55:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:10.981 18:55:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.981 18:55:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:10.981 18:55:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:10.981 18:55:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.981 18:55:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:10.981 18:55:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.981 18:55:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.981 18:55:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.981 18:55:40 thread -- scripts/common.sh@368 -- # return 0 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.981 --rc genhtml_branch_coverage=1 00:06:10.981 --rc genhtml_function_coverage=1 00:06:10.981 --rc genhtml_legend=1 00:06:10.981 --rc geninfo_all_blocks=1 00:06:10.981 --rc geninfo_unexecuted_blocks=1 00:06:10.981 00:06:10.981 ' 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.981 --rc genhtml_branch_coverage=1 00:06:10.981 --rc genhtml_function_coverage=1 00:06:10.981 --rc genhtml_legend=1 00:06:10.981 --rc geninfo_all_blocks=1 00:06:10.981 --rc geninfo_unexecuted_blocks=1 00:06:10.981 00:06:10.981 ' 00:06:10.981 18:55:40 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.981 --rc genhtml_branch_coverage=1 00:06:10.981 --rc genhtml_function_coverage=1 00:06:10.981 --rc genhtml_legend=1 00:06:10.981 --rc geninfo_all_blocks=1 00:06:10.981 --rc geninfo_unexecuted_blocks=1 00:06:10.981 00:06:10.981 ' 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.981 --rc genhtml_branch_coverage=1 00:06:10.981 --rc genhtml_function_coverage=1 00:06:10.981 --rc genhtml_legend=1 00:06:10.981 --rc geninfo_all_blocks=1 00:06:10.981 --rc geninfo_unexecuted_blocks=1 00:06:10.981 00:06:10.981 ' 00:06:10.981 18:55:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.981 18:55:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.981 ************************************ 00:06:10.981 START TEST thread_poller_perf 00:06:10.981 ************************************ 00:06:10.981 18:55:40 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.981 [2024-11-05 18:55:40.241223] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:10.981 [2024-11-05 18:55:40.241317] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126445 ] 00:06:11.243 [2024-11-05 18:55:40.321010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.243 [2024-11-05 18:55:40.363860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.243 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:12.186 [2024-11-05T17:55:41.509Z] ====================================== 00:06:12.186 [2024-11-05T17:55:41.509Z] busy:2409994928 (cyc) 00:06:12.186 [2024-11-05T17:55:41.509Z] total_run_count: 287000 00:06:12.186 [2024-11-05T17:55:41.509Z] tsc_hz: 2400000000 (cyc) 00:06:12.186 [2024-11-05T17:55:41.509Z] ====================================== 00:06:12.186 [2024-11-05T17:55:41.509Z] poller_cost: 8397 (cyc), 3498 (nsec) 00:06:12.186 00:06:12.187 real 0m1.185s 00:06:12.187 user 0m1.109s 00:06:12.187 sys 0m0.072s 00:06:12.187 18:55:41 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.187 18:55:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.187 ************************************ 00:06:12.187 END TEST thread_poller_perf 00:06:12.187 ************************************ 00:06:12.187 18:55:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.187 18:55:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:12.187 18:55:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.187 18:55:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.187 ************************************ 00:06:12.187 START TEST thread_poller_perf 00:06:12.187 ************************************ 00:06:12.187 18:55:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.187 [2024-11-05 18:55:41.501422] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:12.187 [2024-11-05 18:55:41.501512] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126713 ] 00:06:12.447 [2024-11-05 18:55:41.576976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.447 [2024-11-05 18:55:41.610331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.447 Running 1000 pollers for 1 seconds with 0 microseconds period. 
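Note: poller_cost in the summary above is straight division: 2409994928 busy cycles / 287000 runs = 8397 cycles per poller invocation, and at tsc_hz 2400000000 (2.4 GHz) that is 8397 / 2.4 = 3498 nsec. The -l 0 (zero-period, busy poller) run below works out the same way: 2402347052 / 3813000 = 630 cycles, i.e. 262 nsec. Reproducing the first result:
  awk 'BEGIN { c = int(2409994928 / 287000); printf "%d cyc, %d nsec\n", c, int(c / 2.4) }'
  # 8397 cyc, 3498 nsec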
00:06:13.390 [2024-11-05T17:55:42.713Z] ====================================== 00:06:13.390 [2024-11-05T17:55:42.713Z] busy:2402347052 (cyc) 00:06:13.390 [2024-11-05T17:55:42.713Z] total_run_count: 3813000 00:06:13.390 [2024-11-05T17:55:42.713Z] tsc_hz: 2400000000 (cyc) 00:06:13.390 [2024-11-05T17:55:42.713Z] ====================================== 00:06:13.390 [2024-11-05T17:55:42.713Z] poller_cost: 630 (cyc), 262 (nsec) 00:06:13.390 00:06:13.390 real 0m1.165s 00:06:13.390 user 0m1.102s 00:06:13.390 sys 0m0.058s 00:06:13.391 18:55:42 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.391 18:55:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.391 ************************************ 00:06:13.391 END TEST thread_poller_perf 00:06:13.391 ************************************ 00:06:13.391 18:55:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.391 00:06:13.391 real 0m2.700s 00:06:13.391 user 0m2.379s 00:06:13.391 sys 0m0.333s 00:06:13.391 18:55:42 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.391 18:55:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.391 ************************************ 00:06:13.391 END TEST thread 00:06:13.391 ************************************ 00:06:13.652 18:55:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:13.652 18:55:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.652 18:55:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:13.652 18:55:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.652 18:55:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.652 ************************************ 00:06:13.652 START TEST app_cmdline 00:06:13.652 ************************************ 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.652 * Looking for test storage... 
00:06:13.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.652 18:55:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:13.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.652 --rc genhtml_branch_coverage=1 00:06:13.652 --rc genhtml_function_coverage=1 00:06:13.652 --rc genhtml_legend=1 00:06:13.652 --rc geninfo_all_blocks=1 00:06:13.652 --rc geninfo_unexecuted_blocks=1 00:06:13.652 00:06:13.652 ' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:13.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.652 --rc genhtml_branch_coverage=1 00:06:13.652 --rc genhtml_function_coverage=1 00:06:13.652 --rc genhtml_legend=1 00:06:13.652 --rc geninfo_all_blocks=1 00:06:13.652 --rc geninfo_unexecuted_blocks=1 
00:06:13.652 00:06:13.652 ' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:13.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.652 --rc genhtml_branch_coverage=1 00:06:13.652 --rc genhtml_function_coverage=1 00:06:13.652 --rc genhtml_legend=1 00:06:13.652 --rc geninfo_all_blocks=1 00:06:13.652 --rc geninfo_unexecuted_blocks=1 00:06:13.652 00:06:13.652 ' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:13.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.652 --rc genhtml_branch_coverage=1 00:06:13.652 --rc genhtml_function_coverage=1 00:06:13.652 --rc genhtml_legend=1 00:06:13.652 --rc geninfo_all_blocks=1 00:06:13.652 --rc geninfo_unexecuted_blocks=1 00:06:13.652 00:06:13.652 ' 00:06:13.652 18:55:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:13.652 18:55:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127116 00:06:13.652 18:55:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127116 00:06:13.652 18:55:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 127116 ']' 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.652 18:55:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.914 [2024-11-05 18:55:43.017346] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
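Note: this spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two JSON-RPC methods are reachable; the test below relies on that by expecting env_dpdk_get_mem_stats to come back as -32601 "Method not found". Probing the allowlist by hand (a usage sketch with the same rpc.py the test calls; the jq pipe is an assumption for pulling one field):
  scripts/rpc.py spdk_get_version | jq -r .version   # allowed: SPDK v25.01-pre git sha1 8053cd6b8
  scripts/rpc.py env_dpdk_get_mem_stats              # rejected: -32601, Method not found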
00:06:13.914 [2024-11-05 18:55:43.017409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127116 ] 00:06:13.914 [2024-11-05 18:55:43.092283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.914 [2024-11-05 18:55:43.128547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.174 18:55:43 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.174 18:55:43 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:14.174 18:55:43 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.174 { 00:06:14.174 "version": "SPDK v25.01-pre git sha1 8053cd6b8", 00:06:14.174 "fields": { 00:06:14.174 "major": 25, 00:06:14.174 "minor": 1, 00:06:14.174 "patch": 0, 00:06:14.174 "suffix": "-pre", 00:06:14.174 "commit": "8053cd6b8" 00:06:14.174 } 00:06:14.174 } 00:06:14.174 18:55:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.174 18:55:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.174 18:55:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.174 18:55:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.435 request: 00:06:14.435 { 00:06:14.435 "method": "env_dpdk_get_mem_stats", 00:06:14.435 "req_id": 1 00:06:14.435 } 00:06:14.435 Got JSON-RPC error response 00:06:14.435 response: 00:06:14.435 { 00:06:14.435 "code": -32601, 00:06:14.435 "message": "Method not found" 00:06:14.435 } 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.435 18:55:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127116 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 127116 ']' 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 127116 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.435 18:55:43 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 127116 00:06:14.696 18:55:43 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.696 18:55:43 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.696 18:55:43 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 127116' 00:06:14.696 killing process with pid 127116 00:06:14.697 18:55:43 app_cmdline -- common/autotest_common.sh@971 -- # kill 127116 00:06:14.697 18:55:43 app_cmdline -- common/autotest_common.sh@976 -- # wait 127116 00:06:14.697 00:06:14.697 real 0m1.242s 00:06:14.697 user 0m1.537s 00:06:14.697 sys 0m0.416s 00:06:14.697 18:55:44 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.697 18:55:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.697 ************************************ 00:06:14.697 END TEST app_cmdline 00:06:14.697 ************************************ 00:06:14.958 18:55:44 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.958 18:55:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:14.958 18:55:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:14.958 18:55:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.958 ************************************ 00:06:14.958 START TEST version 00:06:14.958 ************************************ 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.958 * Looking for test storage... 
00:06:14.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:14.958 18:55:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.958 18:55:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.958 18:55:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.958 18:55:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.958 18:55:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.958 18:55:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.958 18:55:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.958 18:55:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.958 18:55:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.958 18:55:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.958 18:55:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.958 18:55:44 version -- scripts/common.sh@344 -- # case "$op" in 00:06:14.958 18:55:44 version -- scripts/common.sh@345 -- # : 1 00:06:14.958 18:55:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.958 18:55:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.958 18:55:44 version -- scripts/common.sh@365 -- # decimal 1 00:06:14.958 18:55:44 version -- scripts/common.sh@353 -- # local d=1 00:06:14.958 18:55:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.958 18:55:44 version -- scripts/common.sh@355 -- # echo 1 00:06:14.958 18:55:44 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.958 18:55:44 version -- scripts/common.sh@366 -- # decimal 2 00:06:14.958 18:55:44 version -- scripts/common.sh@353 -- # local d=2 00:06:14.958 18:55:44 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.958 18:55:44 version -- scripts/common.sh@355 -- # echo 2 00:06:14.958 18:55:44 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.958 18:55:44 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.958 18:55:44 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.958 18:55:44 version -- scripts/common.sh@368 -- # return 0 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:14.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.958 --rc genhtml_branch_coverage=1 00:06:14.958 --rc genhtml_function_coverage=1 00:06:14.958 --rc genhtml_legend=1 00:06:14.958 --rc geninfo_all_blocks=1 00:06:14.958 --rc geninfo_unexecuted_blocks=1 00:06:14.958 00:06:14.958 ' 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:14.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.958 --rc genhtml_branch_coverage=1 00:06:14.958 --rc genhtml_function_coverage=1 00:06:14.958 --rc genhtml_legend=1 00:06:14.958 --rc geninfo_all_blocks=1 00:06:14.958 --rc geninfo_unexecuted_blocks=1 00:06:14.958 00:06:14.958 ' 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:14.958 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.958 --rc genhtml_branch_coverage=1 00:06:14.958 --rc genhtml_function_coverage=1 00:06:14.958 --rc genhtml_legend=1 00:06:14.958 --rc geninfo_all_blocks=1 00:06:14.958 --rc geninfo_unexecuted_blocks=1 00:06:14.958 00:06:14.958 ' 00:06:14.958 18:55:44 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:14.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.958 --rc genhtml_branch_coverage=1 00:06:14.958 --rc genhtml_function_coverage=1 00:06:14.958 --rc genhtml_legend=1 00:06:14.958 --rc geninfo_all_blocks=1 00:06:14.958 --rc geninfo_unexecuted_blocks=1 00:06:14.958 00:06:14.958 ' 00:06:14.958 18:55:44 version -- app/version.sh@17 -- # get_header_version major 00:06:14.958 18:55:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.958 18:55:44 version -- app/version.sh@14 -- # cut -f2 00:06:14.958 18:55:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.958 18:55:44 version -- app/version.sh@17 -- # major=25 00:06:14.958 18:55:44 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.958 18:55:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.958 18:55:44 version -- app/version.sh@14 -- # cut -f2 00:06:15.219 18:55:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.219 18:55:44 version -- app/version.sh@18 -- # minor=1 00:06:15.219 18:55:44 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.219 18:55:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.219 18:55:44 version -- app/version.sh@14 -- # cut -f2 00:06:15.219 18:55:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.219 18:55:44 version -- app/version.sh@19 -- # patch=0 00:06:15.219 18:55:44 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.219 18:55:44 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.219 18:55:44 version -- app/version.sh@14 -- # cut -f2 00:06:15.219 18:55:44 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.219 18:55:44 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.219 18:55:44 version -- app/version.sh@22 -- # version=25.1 00:06:15.219 18:55:44 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.219 18:55:44 version -- app/version.sh@28 -- # version=25.1rc0 00:06:15.219 18:55:44 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.219 18:55:44 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.219 18:55:44 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:15.219 18:55:44 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:15.219 00:06:15.219 real 0m0.267s 00:06:15.219 user 0m0.161s 00:06:15.219 sys 0m0.154s 00:06:15.219 18:55:44 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.219 
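Note: the version test above reads each field with a grep/cut/tr pipeline over include/spdk/version.h and then assembles the string: major=25, minor=1, patch=0, suffix=-pre give version=25.1 (patch is appended only when non-zero, per the (( patch != 0 )) check), and in this run the -pre suffix yields 25.1rc0, which must match python's spdk.__version__. One field read in isolation, following the trace:
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
  # 25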
18:55:44 version -- common/autotest_common.sh@10 -- # set +x 00:06:15.219 ************************************ 00:06:15.219 END TEST version 00:06:15.219 ************************************ 00:06:15.219 18:55:44 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:15.219 18:55:44 -- spdk/autotest.sh@194 -- # uname -s 00:06:15.219 18:55:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:15.219 18:55:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.219 18:55:44 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.219 18:55:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:15.219 18:55:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.219 18:55:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.219 18:55:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:15.219 18:55:44 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:15.219 18:55:44 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.219 18:55:44 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:15.219 18:55:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.219 18:55:44 -- common/autotest_common.sh@10 -- # set +x 00:06:15.219 ************************************ 00:06:15.219 START TEST nvmf_tcp 00:06:15.219 ************************************ 00:06:15.219 18:55:44 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.481 * Looking for test storage... 
00:06:15.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.481 18:55:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.481 --rc genhtml_branch_coverage=1 00:06:15.481 --rc genhtml_function_coverage=1 00:06:15.481 --rc genhtml_legend=1 00:06:15.481 --rc geninfo_all_blocks=1 00:06:15.481 --rc geninfo_unexecuted_blocks=1 00:06:15.481 00:06:15.481 ' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.481 --rc genhtml_branch_coverage=1 00:06:15.481 --rc genhtml_function_coverage=1 00:06:15.481 --rc genhtml_legend=1 00:06:15.481 --rc geninfo_all_blocks=1 00:06:15.481 --rc geninfo_unexecuted_blocks=1 00:06:15.481 00:06:15.481 ' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.481 --rc genhtml_branch_coverage=1 00:06:15.481 --rc genhtml_function_coverage=1 00:06:15.481 --rc genhtml_legend=1 00:06:15.481 --rc geninfo_all_blocks=1 00:06:15.481 --rc geninfo_unexecuted_blocks=1 00:06:15.481 00:06:15.481 ' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.481 --rc genhtml_branch_coverage=1 00:06:15.481 --rc genhtml_function_coverage=1 00:06:15.481 --rc genhtml_legend=1 00:06:15.481 --rc geninfo_all_blocks=1 00:06:15.481 --rc geninfo_unexecuted_blocks=1 00:06:15.481 00:06:15.481 ' 00:06:15.481 18:55:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.481 18:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.481 ************************************ 00:06:15.481 START TEST nvmf_target_core 00:06:15.481 ************************************ 00:06:15.481 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:15.481 * Looking for test storage... 00:06:15.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.481 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.481 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.481 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:15.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.743 --rc genhtml_branch_coverage=1 00:06:15.743 --rc genhtml_function_coverage=1 00:06:15.743 --rc genhtml_legend=1 00:06:15.743 --rc geninfo_all_blocks=1 00:06:15.743 --rc geninfo_unexecuted_blocks=1 00:06:15.743 00:06:15.743 ' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:15.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.743 --rc genhtml_branch_coverage=1 00:06:15.743 --rc genhtml_function_coverage=1 00:06:15.743 --rc genhtml_legend=1 00:06:15.743 --rc geninfo_all_blocks=1 00:06:15.743 --rc geninfo_unexecuted_blocks=1 00:06:15.743 00:06:15.743 ' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:15.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.743 --rc genhtml_branch_coverage=1 00:06:15.743 --rc genhtml_function_coverage=1 00:06:15.743 --rc genhtml_legend=1 00:06:15.743 --rc geninfo_all_blocks=1 00:06:15.743 --rc geninfo_unexecuted_blocks=1 00:06:15.743 00:06:15.743 ' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:15.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.743 --rc genhtml_branch_coverage=1 00:06:15.743 --rc genhtml_function_coverage=1 00:06:15.743 --rc genhtml_legend=1 00:06:15.743 --rc geninfo_all_blocks=1 00:06:15.743 --rc geninfo_unexecuted_blocks=1 00:06:15.743 00:06:15.743 ' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.743 18:55:44 
nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:15.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:06:15.743 18:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.744 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:15.744 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.744 18:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.744 ************************************ 00:06:15.744 START TEST nvmf_abort 00:06:15.744 ************************************ 00:06:15.744 18:55:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.744 * Looking for test storage... 
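The lcov probe traced above for nvmf_target_core (and repeated below for the nvmf_abort sub-test) is the harness deciding which coverage flag names the installed lcov understands: it splits both version strings on dots and dashes, then compares numeric fields left to right. A minimal standalone sketch of that comparison, with a hypothetical version_lt condensing the lt/cmp_versions pair from scripts/common.sh:

  version_lt() {                        # returns 0 iff $1 < $2, field by field
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields default to 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                            # equal -> not less-than
  }
  # pre-2.0 lcov (1.15 here) takes the lcov_* --rc option names, as in the trace
  version_lt "$(lcov --version | awk '{print $NF}')" 2 \
    && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'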
00:06:15.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.744 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:15.744 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:15.744 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.006 --rc genhtml_branch_coverage=1 00:06:16.006 --rc genhtml_function_coverage=1 00:06:16.006 --rc genhtml_legend=1 00:06:16.006 --rc geninfo_all_blocks=1 00:06:16.006 --rc geninfo_unexecuted_blocks=1 00:06:16.006 00:06:16.006 ' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.006 --rc genhtml_branch_coverage=1 00:06:16.006 --rc genhtml_function_coverage=1 00:06:16.006 --rc genhtml_legend=1 00:06:16.006 --rc geninfo_all_blocks=1 00:06:16.006 --rc geninfo_unexecuted_blocks=1 00:06:16.006 00:06:16.006 ' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.006 --rc genhtml_branch_coverage=1 00:06:16.006 --rc genhtml_function_coverage=1 00:06:16.006 --rc genhtml_legend=1 00:06:16.006 --rc geninfo_all_blocks=1 00:06:16.006 --rc geninfo_unexecuted_blocks=1 00:06:16.006 00:06:16.006 ' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.006 --rc genhtml_branch_coverage=1 00:06:16.006 --rc genhtml_function_coverage=1 00:06:16.006 --rc genhtml_legend=1 00:06:16.006 --rc geninfo_all_blocks=1 00:06:16.006 --rc geninfo_unexecuted_blocks=1 00:06:16.006 00:06:16.006 ' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.006 18:55:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:16.006 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:16.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:16.007 
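The repeated "[: : integer expression expected" complaint from common.sh line 31 is benign: an unset toggle expands to the empty string, and '[' '' -eq 1 ']' is not a valid integer test, so bash prints the warning and the condition simply evaluates false. A tiny sketch of the failure mode and one defensive rewrite (flag is a stand-in name, not the variable the harness uses):

  flag=""                               # unset/empty in this CI environment
  [ "$flag" -eq 1 ] && echo on          # noisy: "integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo on     # defaulted to 0: silent, simply false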
18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:06:16.007 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.299 
18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:24.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:24.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:24.299 18:55:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:24.299 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:24.299 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:24.299 18:55:52 
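prepare_net_devs above finds the two E810 ports by PCI vendor:device ID (0x8086:0x159b) and resolves each to its kernel interface through sysfs, which is how 0000:4b:00.0/.1 become cvl_0_0/cvl_0_1. Roughly, assuming the same ice-driven E810 hardware:

  # list E810 ports by vendor:device, then map each PCI address to its netdev
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
      echo "Found net device under $pci: $(basename "$net")"
    done
  done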
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:24.299 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:24.300 10.0.0.1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:24.300 10.0.0.2 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 
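set_ip hands addresses around as plain integers (167772161 == 0x0A000001) and only renders the dotted quad at the last moment; the printf with shifted bytes below matches the val_to_ip call traced above:

  val_to_ip() {                         # 167772161 -> 10.0.0.1
    local v=$1
    printf '%u.%u.%u.%u\n' $(( v >> 24 & 255 )) $(( v >> 16 & 255 )) \
                           $(( v >> 8 & 255 ))  $(( v & 255 ))
  }
  val_to_ip 167772161                   # 10.0.0.1 (initiator side)
  val_to_ip 167772162                   # 10.0.0.2 (target side)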
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
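Stripped of the helper indirection, the topology setup_interfaces has just built is: the initiator port stays in the root namespace as 10.0.0.1/24, the target port moves into nvmf_ns_spdk as 10.0.0.2/24, each address is mirrored into the interface's ifalias for later lookup, and an iptables rule admits the NVMe/TCP port (the harness additionally tags the rule with an SPDK_NVMF comment). Condensed from the commands above:

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link set cvl_0_1 netns nvmf_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_0               # initiator side, root ns
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
  ip link set cvl_0_0 up
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT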
00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:24.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.710 ms 00:06:24.300 00:06:24.300 --- 10.0.0.1 ping statistics --- 00:06:24.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.300 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:24.300 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 
00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:24.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:06:24.301 00:06:24.301 --- 10.0.0.2 ping statistics --- 00:06:24.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.301 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # 
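ping_ips then proves both directions before any NVMe traffic flows, exactly as the two ping transcripts above show:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator port
  ping -c 1 10.0.0.2                                # root ns   -> target port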
get_initiator_ip_address 1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 
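nvmf_legacy_env rebuilds the classic test variables from on-disk state rather than from shell memory: each address is read back out of the interface's ifalias, and lookups for a second initiator/target pair fall through to empty strings on this single-pair phy setup. Roughly:

  NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/cvl_0_0/ifalias)
  NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias)
  NVMF_SECOND_INITIATOR_IP=            # no initiator1 in dev_map -> stays empty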
NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=131623 00:06:24.301 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 131623 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 131623 ']' 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
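nvmfappstart runs the target inside the namespace and parks in waitforlisten until the RPC socket answers. A rough stand-in for that launch-and-wait step (the poll loop is a simplification of waitforlisten; paths are relative to the spdk checkout):

  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll until the app answers on the default /var/tmp/spdk.sock RPC socket
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
  done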
00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.302 18:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.302 [2024-11-05 18:55:52.814371] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:24.302 [2024-11-05 18:55:52.814424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.302 [2024-11-05 18:55:52.910336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.302 [2024-11-05 18:55:52.956405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.302 [2024-11-05 18:55:52.956458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.302 [2024-11-05 18:55:52.956467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.302 [2024-11-05 18:55:52.956474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.302 [2024-11-05 18:55:52.956481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:24.302 [2024-11-05 18:55:52.958469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.302 [2024-11-05 18:55:52.958632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.302 [2024-11-05 18:55:52.958633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.302 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.302 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:06:24.302 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:24.302 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.302 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 [2024-11-05 18:55:53.666621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 Malloc0 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # 
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 Delay0 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 [2024-11-05 18:55:53.747855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.563 18:55:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:24.563 [2024-11-05 18:55:53.877354] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:27.108 Initializing NVMe Controllers 00:06:27.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:27.109 controller IO queue size 128 less than required 00:06:27.109 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:27.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:27.109 Initialization complete. Launching workers. 
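The target side of the abort test traced above is driven entirely through rpc.py. A minimal manual replay of the same sequence, assuming a running nvmf_tgt and this job's SPDK checkout — every argument is copied verbatim from the trace, only the $rpc shorthand is added here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transport options exactly as passed by target/abort.sh
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  # 64 MiB RAM-backed bdev with 4096-byte blocks...
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  # ...wrapped in a delay bdev (~1 s artificial read/write latency) so that
  # abort requests reliably find I/O still in flight
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # subsystem allowing any host (-a), the namespace, and the TCP listeners
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the abort example on one core (-c 0x1) for 1 s (-t 1)
  # at queue depth 128 (-q 128)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The requested queue depth of 128 exceeds what the controller grants, hence the "controller IO queue size 128 less than required" notice above; the abort statistics for the run follow.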
00:06:27.109 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29064 00:06:27.109 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29125, failed to submit 62 00:06:27.109 success 29068, unsuccessful 57, failed 0 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:27.109 18:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:27.109 rmmod nvme_tcp 00:06:27.109 rmmod nvme_fabrics 00:06:27.109 rmmod nvme_keyring 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 131623 ']' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 131623 ']' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 131623' 00:06:27.109 killing process with pid 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 131623 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@342 -- # nvmf_fini 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:27.109 18:55:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:06:29.024 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:06:29.025 00:06:29.025 real 0m13.362s 00:06:29.025 user 0m13.836s 00:06:29.025 sys 0m6.558s 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.025 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.025 ************************************ 00:06:29.025 END TEST nvmf_abort 00:06:29.025 ************************************ 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.287 ************************************ 00:06:29.287 START TEST nvmf_ns_hotplug_stress 00:06:29.287 ************************************ 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:29.287 * Looking for test storage... 00:06:29.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.287 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.288 --rc genhtml_branch_coverage=1 00:06:29.288 --rc genhtml_function_coverage=1 00:06:29.288 --rc genhtml_legend=1 00:06:29.288 --rc geninfo_all_blocks=1 00:06:29.288 --rc geninfo_unexecuted_blocks=1 00:06:29.288 00:06:29.288 ' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.288 --rc genhtml_branch_coverage=1 00:06:29.288 --rc genhtml_function_coverage=1 00:06:29.288 --rc genhtml_legend=1 00:06:29.288 --rc geninfo_all_blocks=1 00:06:29.288 --rc geninfo_unexecuted_blocks=1 00:06:29.288 00:06:29.288 ' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.288 --rc genhtml_branch_coverage=1 00:06:29.288 --rc genhtml_function_coverage=1 00:06:29.288 --rc genhtml_legend=1 00:06:29.288 --rc geninfo_all_blocks=1 00:06:29.288 --rc geninfo_unexecuted_blocks=1 00:06:29.288 00:06:29.288 ' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.288 --rc genhtml_branch_coverage=1 00:06:29.288 --rc genhtml_function_coverage=1 00:06:29.288 --rc genhtml_legend=1 00:06:29.288 --rc geninfo_all_blocks=1 00:06:29.288 --rc geninfo_unexecuted_blocks=1 00:06:29.288 00:06:29.288 ' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.288 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.549 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:29.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:06:29.550 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 
00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:37.695 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
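The scan above classifies NICs by PCI vendor:device ID before handing them to the test; 0x8086:0x159b falls into the e810 bucket populated a few lines earlier. A quick manual equivalent via sysfs, using the bus address from the log — the sysfs reads here are only an illustration, the script itself resolves the IDs from its prebuilt pci_bus_cache:

  pci=0000:4b:00.0
  cat /sys/bus/pci/devices/$pci/vendor                     # 0x8086 (Intel)
  cat /sys/bus/pci/devices/$pci/device                     # 0x159b (E810 family)
  basename "$(readlink /sys/bus/pci/devices/$pci/driver)"  # ice, as echoed in the trace
  ls /sys/bus/pci/devices/$pci/net                         # net device bound to the port, e.g. cvl_0_0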
00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:37.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:37.695 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.695 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.696 18:56:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:37.696 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:37.696 18:56:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:37.696 10.0.0.1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:37.696 10.0.0.2 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:37.696 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:37.697 
18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:37.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.618 ms 00:06:37.697 00:06:37.697 --- 10.0.0.1 ping statistics --- 00:06:37.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.697 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:37.697 18:56:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:37.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:06:37.697 00:06:37.697 --- 10.0.0.2 ping statistics --- 00:06:37.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.697 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:37.697 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.697 
18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.697 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:37.698 18:56:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=136610 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 136610 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xE 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 136610 ']' 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.698 [2024-11-05 18:56:06.153153] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:37.698 [2024-11-05 18:56:06.153230] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.698 [2024-11-05 18:56:06.253334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.698 [2024-11-05 18:56:06.304867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.698 [2024-11-05 18:56:06.304930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.698 [2024-11-05 18:56:06.304939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.698 [2024-11-05 18:56:06.304946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.698 [2024-11-05 18:56:06.304953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
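
Annotation. The target command is wrapped across the trace lines above; reassembled in one piece it is:

    # nvmf_tgt runs inside the nvmf_ns_spdk network namespace:
    #   -i 0      shared memory ID (matches --file-prefix=spdk0 in the EAL arguments)
    #   -e 0xFFFF tracepoint group mask (the source of the 'Tracepoint Group Mask 0xFFFF' notice)
    #   -m 0xE    core mask 0b1110, i.e. cores 1-3 -- the three reactor threads logged below
    ip netns exec nvmf_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
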
00:06:37.698 [2024-11-05 18:56:06.306791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.698 [2024-11-05 18:56:06.306991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.698 [2024-11-05 18:56:06.306992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.698 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.698 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.698 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:37.698 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:37.959 [2024-11-05 18:56:07.158412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.959 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:38.220 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.220 [2024-11-05 18:56:07.519864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.481 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.481 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:38.743 Malloc0 00:06:38.743 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:39.004 Delay0 00:06:39.004 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.004 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:39.264 NULL1 00:06:39.264 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:39.526 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:39.526 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=137067 00:06:39.526 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:39.526 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.787 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.787 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:39.787 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:40.047 true 00:06:40.047 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:40.047 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.307 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.307 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:40.307 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:40.567 true 00:06:40.567 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:40.567 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.827 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.088 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:41.088 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:41.088 true 00:06:41.088 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:41.088 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.349 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.610 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:41.610 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:41.610 true 00:06:41.610 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:41.611 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.871 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.132 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:42.132 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:42.132 true 00:06:42.132 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:42.132 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.392 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.653 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:42.653 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:42.653 true 00:06:42.914 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:42.914 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.914 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.175 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:43.175 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:43.175 true 00:06:43.436 18:56:12 
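
Annotation. For reference, this is the RPC sequence from the trace that assembled the target stack before the loop above started (paths shortened to rpc.py; -u 8192 sets an 8 KiB I/O unit size, and -o is the extra TCP-only flag the suite appends to its transport options):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MiB RAM bdev, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read+write latency, in usec
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # the namespace the loop removes/re-adds
    rpc.py bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # the namespace the loop resizes
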
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:43.436 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.436 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.697 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:43.697 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:43.958 true 00:06:43.958 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:43.958 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.958 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.218 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:44.218 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:44.479 true 00:06:44.479 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:44.479 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.479 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.740 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:44.740 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:45.001 true 00:06:45.001 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:45.001 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.262 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.262 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:45.262 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:45.523 true 00:06:45.523 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:45.523 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.784 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.045 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:46.045 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.045 true 00:06:46.045 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:46.045 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.305 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.567 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:46.567 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:46.567 true 00:06:46.567 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:46.567 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.827 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.087 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:47.087 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:47.087 true 00:06:47.087 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:47.087 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.348 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.609 18:56:16 
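
Annotation. The process the loop keeps probing with kill -0 is the load generator started at sh@40 above, PID 137067. Its flags, as given in the trace (the -Q option lets the run continue across the I/O errors that the deliberate namespace removals are bound to cause):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000
    # -c 0x1   run the I/O worker on core 0
    # -r '...' transport ID of the listener created above
    # -t 30    run time in seconds, which also bounds the hotplug loop below
    # -q 128   queue depth; -w randread -o 512: 512-byte random reads
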
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:47.609 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:47.609 true 00:06:47.609 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:47.609 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.870 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.130 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:48.130 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:48.130 true 00:06:48.130 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:48.130 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.390 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.651 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:48.651 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:48.651 true 00:06:48.911 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:48.911 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.911 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.171 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:49.171 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:49.431 true 00:06:49.431 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:49.431 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.431 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.691 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:49.691 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:49.951 true 00:06:49.951 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:49.951 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.951 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.212 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:50.212 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:50.472 true 00:06:50.472 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:50.472 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.732 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.732 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:50.732 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:50.993 true 00:06:50.993 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:50.993 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.253 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.253 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:51.253 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:51.515 true 00:06:51.515 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:51.515 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.776 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.776 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:51.776 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:52.038 true 00:06:52.038 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:52.038 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.299 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.560 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:52.560 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:52.560 true 00:06:52.560 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:52.560 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.820 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.081 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:53.081 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:53.081 true 00:06:53.081 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:53.081 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.343 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.604 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:53.604 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:53.604 true 00:06:53.604 18:56:22 
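
Annotation. Every iteration in this long stretch repeats the same five trace tags. Reconstructed from the sh@44..sh@50 markers (rpc.py path shortened), the driving loop is, in sketch form:

    null_size=1000
    while kill -0 "$PERF_PID"; do                                        # sh@44: perf still alive?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back
        null_size=$((null_size + 1))                                     # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"                       # sh@50: grow NULL1 by one MiB
    done
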
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:53.604 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.864 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.127 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:54.127 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:54.127 true 00:06:54.391 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:54.391 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.391 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.651 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:54.651 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:54.912 true 00:06:54.912 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:54.912 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.912 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.173 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:55.173 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:55.434 true 00:06:55.434 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:55.434 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.434 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.695 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:55.695 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:55.956 true 00:06:55.956 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:55.956 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.216 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.216 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:56.216 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:56.477 true 00:06:56.477 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:56.477 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.738 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.738 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:56.738 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:57.000 true 00:06:57.000 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:57.000 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.261 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.261 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:57.261 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:57.522 true 00:06:57.522 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:57.522 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.807 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.807 18:56:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:57.807 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:58.145 true 00:06:58.145 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:58.145 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.429 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.429 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:58.429 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:58.689 true 00:06:58.689 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:58.689 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.689 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.949 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:58.949 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:59.208 true 00:06:59.208 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:59.208 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.468 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.469 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:59.469 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:59.728 true 00:06:59.728 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:06:59.728 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.989 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.989 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:59.989 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:00.248 true 00:07:00.248 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:00.248 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.516 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.517 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:00.517 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:00.780 true 00:07:00.780 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:00.780 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.039 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.039 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:01.039 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:01.299 true 00:07:01.299 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:01.299 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.559 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.819 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:01.819 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:01.819 true 00:07:01.819 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:01.819 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.079 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.339 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:02.339 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:02.339 true 00:07:02.339 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:02.339 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.599 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.859 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:02.859 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:02.859 true 00:07:03.119 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:03.119 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.119 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.380 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:03.380 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:03.380 true 00:07:03.639 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:03.639 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.639 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.899 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:03.899 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:04.158 true 00:07:04.158 18:56:33 
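
Annotation. The timestamps give a feel for the churn rate: resize 1001 completes at about 18:56:09 and resize 1045 at about 18:56:33, so

    44 remove/add/resize round trips in ~24 s  =>  ~0.55 s per iteration, roughly 2 hotplug cycles per second

each of which goes through rpc.py and the target's RPC server while perf keeps 128 reads in flight.
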
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:04.158 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.158 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.417 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:04.418 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:04.677 true 00:07:04.677 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:04.678 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.678 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.938 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:04.938 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:05.198 true 00:07:05.198 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:05.198 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.458 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.458 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:05.458 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:05.718 true 00:07:05.718 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:05.718 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.978 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.978 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:05.978 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:06.239 true 00:07:06.239 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:06.239 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.499 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.499 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:06.499 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:06.758 true 00:07:06.758 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:06.758 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.018 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.279 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:07.279 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:07.279 true 00:07:07.279 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:07.279 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.539 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.799 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:07.799 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:07.799 true 00:07:07.800 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:07.800 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.060 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.322 18:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:08.322 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:08.322 true 00:07:08.322 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:08.322 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.582 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.842 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:08.842 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:08.842 true 00:07:08.842 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:08.842 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.102 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.362 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:09.362 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:09.362 true 00:07:09.623 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067 00:07:09.623 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.623 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.623 Initializing NVMe Controllers 00:07:09.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.623 Controller IO queue size 128, less than required. 00:07:09.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:09.623 Initialization complete. Launching workers. 
00:07:09.623 ========================================================
00:07:09.623 Latency(us)
00:07:09.623 Device Information : IOPS MiB/s Average min max
00:07:09.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30444.54 14.87 4204.28 1441.11 10856.95
00:07:09.623 ========================================================
00:07:09.623 Total : 30444.54 14.87 4204.28 1441.11 10856.95
00:07:09.884 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:09.884 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:10.145 true
00:07:10.145 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 137067
00:07:10.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (137067) - No such process
00:07:10.145 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 137067
00:07:10.145 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:10.407 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:10.667 null0
00:07:10.667 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:10.667 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:10.667 18:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:10.928 null1
00:07:10.928 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:10.928 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:10.928 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:10.928 null2
00:07:10.928 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:10.928 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.928
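The run of sh@44-sh@50 entries above is the resize stress loop: while the background I/O generator (PID 137067 in this run) stays alive, the script hot-swaps namespace 1 on nqn.2016-06.io.spdk:cnode1 and grows the NULL1 bdev by one size unit per pass; when kill -0 finally fails, the generator is reaped and the test moves on. A minimal sketch of that loop, reconstructed from the xtrace rather than quoted from ns_hotplug_stress.sh (perf_pid and the starting null_size value are assumptions; rpc.py abbreviates the full scripts/rpc.py path logged above):

    # Sketch reconstructed from the trace, not the verbatim script.
    null_size=1024                                   # assumed starting size
    while kill -0 "$perf_pid"; do                    # sh@44: generator still alive?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        ((++null_size))                              # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"   # sh@50: grow the null bdev
    done
    wait "$perf_pid"                                 # sh@53: reap the exited generator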
18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:11.189 null3 00:07:11.189 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.189 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.189 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:11.449 null4 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:11.449 null5 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.449 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:11.710 null6 00:07:11.710 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.710 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.710 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:11.971 null7 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
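Each add_remove worker being launched in this stretch pairs one namespace ID with one null bdev and hot-plugs it ten times, as the interleaved sh@14-sh@18 entries show. A sketch of the helper, reconstructed from the trace (argument names are assumptions; rpc.py again abbreviates the full scripts/rpc.py path):

    # Sketch reconstructed from the sh@14-sh@18 trace entries, not the verbatim script.
    add_remove() {
        local nsid=$1 bdev=$2                        # sh@14: e.g. nsid=1, bdev=null0
        for ((i = 0; i < 10; ++i)); do               # sh@16: ten hot-plug rounds
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }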
00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.971 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
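Around these entries the script fans the eight workers out and, at sh@66, reaps them all at once (the wait 143730 143731 ... entry just below). The bookkeeping is the standard bash background-job pattern, sketched here under the same caveats as above:

    # Sketch reconstructed from the sh@58-sh@66 trace entries, not the verbatim script.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; ++i)); do
        rpc.py bdev_null_create "null$i" 100 4096    # sh@60: 100 MB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; ++i)); do
        add_remove "$((i + 1))" "null$i" &           # sh@63: nsid i+1 works against null$i
        pids+=($!)                                   # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                                # sh@66: block until all eight finish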
00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 143730 143731 143735 143738 143741 143744 143747 143750 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.972 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.232 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.493 18:56:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.493 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.755 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.755 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.755 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.016 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.277 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.538 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.799 18:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.799 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.061 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.322 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.323 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.584 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.845 18:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.845 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.106 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.368 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.630 18:56:44 
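
Readable form of the loop traced above (ns_hotplug_stress.sh @16-@18): ten iterations of namespace hot-add/hot-remove RPCs against cnode1. A minimal sketch, assuming a pseudo-random nsid selection; only the RPC names, the NQN, the nsid range 1..8 and the null0..null7 bdev names are taken from the trace, while rpc_py, the selection policy and the error tolerance are assumptions.

# Hedged sketch of the hot-plug stress loop seen in the trace above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    n=$((RANDOM % 8 + 1))                                          # nsid to hot-add
    "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true   # nsid N backs bdev null(N-1)
    m=$((RANDOM % 8 + 1))                                          # nsid to hot-remove
    "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$m" || true         # removal may race with I/O
done
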
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:15.630 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:15.630 rmmod nvme_tcp 00:07:15.891 rmmod nvme_fabrics 00:07:15.891 rmmod nvme_keyring 00:07:15.891 18:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 136610 ']' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 136610 ']' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 136610' 00:07:15.891 killing process with pid 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 136610 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:15.891 18:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:18.434 18:56:47 
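
The teardown traced here (nvmfcleanup, common.sh @99-@107) syncs outstanding I/O and retries unloading the kernel NVMe/TCP modules, which can stay referenced for a moment after the last disconnect; the rmmod lines above are its output. A sketch of that pattern, with the exit condition and backoff assumed (the trace shows a single successful pass):

sync
if [[ $TEST_TRANSPORT == tcp ]]; then         # traced as '[' tcp == tcp ']'
    set +e                                    # unload can fail while references linger
    for i in {1..20}; do                      # retry budget visible at @103
        modprobe -v -r nvme-tcp               # -r also drops nvme_fabrics/nvme_keyring deps
        modprobe -v -r nvme-fabrics
        lsmod | grep -q '^nvme_tcp' || break  # assumed exit condition, not in the trace
        sleep 1                               # assumed backoff, not in the trace
    done
    set -e
fi
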
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:18.434 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:07:18.435 00:07:18.435 real 0m48.876s 00:07:18.435 user 3m20.460s 00:07:18.435 sys 0m17.101s 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 ************************************ 00:07:18.435 END TEST nvmf_ns_hotplug_stress 00:07:18.435 ************************************ 00:07:18.435 18:56:47 
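
nvmf_fini then unwinds the per-test network state: it flushes the addresses it assigned and strips only its own firewall entries. The save/filter/restore round-trip works because every rule the harness installs carries an SPDK_NVMF comment (visible where the rule is added later in this log). Condensed, with the device names from the log:

# Network teardown as traced above (setup.sh flush_ip + common.sh@542 iptr).
for dev in cvl_0_0 cvl_0_1; do
    [[ -e /sys/class/net/$dev/address ]] && ip addr flush dev "$dev"
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
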
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 ************************************ 00:07:18.435 START TEST nvmf_delete_subsystem 00:07:18.435 ************************************ 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.435 * Looking for test storage... 00:07:18.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.435 --rc genhtml_branch_coverage=1 00:07:18.435 --rc genhtml_function_coverage=1 00:07:18.435 --rc genhtml_legend=1 00:07:18.435 --rc geninfo_all_blocks=1 00:07:18.435 --rc geninfo_unexecuted_blocks=1 00:07:18.435 00:07:18.435 ' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.435 --rc genhtml_branch_coverage=1 00:07:18.435 --rc genhtml_function_coverage=1 00:07:18.435 --rc genhtml_legend=1 00:07:18.435 --rc geninfo_all_blocks=1 00:07:18.435 --rc geninfo_unexecuted_blocks=1 00:07:18.435 00:07:18.435 ' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.435 --rc genhtml_branch_coverage=1 00:07:18.435 --rc genhtml_function_coverage=1 00:07:18.435 --rc genhtml_legend=1 00:07:18.435 --rc geninfo_all_blocks=1 00:07:18.435 --rc geninfo_unexecuted_blocks=1 00:07:18.435 00:07:18.435 ' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.435 --rc genhtml_branch_coverage=1 00:07:18.435 --rc genhtml_function_coverage=1 00:07:18.435 --rc genhtml_legend=1 00:07:18.435 --rc geninfo_all_blocks=1 00:07:18.435 --rc geninfo_unexecuted_blocks=1 00:07:18.435 00:07:18.435 ' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
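
The cmp_versions walk traced above (scripts/common.sh @333-@368) decides which lcov option set to export: split each version on '.', '-' and ':', then compare numerically left to right. A condensed equivalent, assuming purely numeric components (the decimal helper in the real script guards the non-numeric case):

# Condensed 'lt' from the cmp_versions trace; returns 0 when $1 < $2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing components default to 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "1.15 < 2"   # matches the trace: lcov 1.x takes the legacy LCOV_OPTS
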
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:18.435 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:18.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:07:18.436 18:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 
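
Note the genuine shell error captured above: nvmf/common.sh line 31 evaluates '[' '' -eq 1 ']', so an empty expansion reaches a numeric test and test(1) complains. The harness tolerates it, but the usual hardening is to default the expansion; SOME_FLAG below is a placeholder, as the real variable name is not visible in the trace:

SOME_FLAG=
# [ "$SOME_FLAG" -eq 1 ]                             # -> "[: : integer expression expected"
[ "${SOME_FLAG:-0}" -eq 1 ] || echo "flag not set"   # default empty/unset to 0 first
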
00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:26.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:26.583 
18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:26.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:26.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:26.583 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:26.584 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 
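
Device discovery above never parses lspci output for the name mapping: for each matching E810 function (0x8086:0x159b at 0000:4b:00.0 and .1) it globs sysfs for the bound net device, which is how cvl_0_0 and cvl_0_1 are found. The pattern from common.sh @226-@245:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
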
00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:26.584 18:56:54 
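
create_target_ns (setup.sh @142-@148), just traced, gives the target side of each interface pair its own network namespace; NVMF_TARGET_NS_CMD holds the prefix later used to run target-side commands there:

ip netns add nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up    # loopback up inside the new ns
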
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:26.584 10.0.0.1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 
ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:26.584 10.0.0.2 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:26.584 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:26.584 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 
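
setup_interface_pair, traced through the lines above, carries addresses around as a plain integer pool (167772161 is 0x0A000001, i.e. 10.0.0.1, bumped by one per device and by two per pair) and opens the listener port with a rule tagged for later removal. A condensed sketch; val_to_ip's shift-and-mask arithmetic is an assumption, since the trace only shows the final printf:

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff ))  $(( val & 0xff ))
}

ip link set cvl_0_1 netns nvmf_ns_spdk                        # target port into the ns
ip addr add "$(val_to_ip 167772161)/24" dev cvl_0_0           # 10.0.0.1, initiator side
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias            # recorded so get_ip_address can read it back
ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev cvl_0_1   # 10.0.0.2, target side
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
# Tagged so nvmf_fini's 'grep -v SPDK_NVMF' strips exactly this rule (common.sh@541):
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
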
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:26.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.583 ms 00:07:26.585 00:07:26.585 --- 10.0.0.1 ping statistics --- 00:07:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.585 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:26.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:26.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:07:26.585 00:07:26.585 --- 10.0.0.2 ping statistics --- 00:07:26.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.585 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:26.585 18:56:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:07:26.585 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=149147 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 149147 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 149147 ']' 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.586 18:56:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.586 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.586 [2024-11-05 18:56:55.310583] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:26.586 [2024-11-05 18:56:55.310651] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.586 [2024-11-05 18:56:55.392868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.586 [2024-11-05 18:56:55.433797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.586 [2024-11-05 18:56:55.433834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.586 [2024-11-05 18:56:55.433842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.586 [2024-11-05 18:56:55.433849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.586 [2024-11-05 18:56:55.433855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
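nvmfappstart here launches nvmf_tgt inside the nvmf_ns_spdk namespace with core mask 0x3, which is why the two reactor notices that follow report cores 0 and 1. A minimal bash sketch of how such a hex mask maps to core numbers (mask_to_cores is a hypothetical helper for illustration; SPDK decodes -m internally):

    # Hypothetical helper: list the CPU cores selected by a core mask like 0x3.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")   # low bit set -> this core is in the mask
            (( mask >>= 1, core++ ))
        done
        echo "${cores[@]}"
    }
    mask_to_cores 0x3   # -> 0 1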
00:07:26.586 [2024-11-05 18:56:55.435196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.586 [2024-11-05 18:56:55.435198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.846 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.846 [2024-11-05 18:56:56.169052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 [2024-11-05 18:56:56.193253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 NULL1 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 Delay0 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=149206 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:27.108 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:27.108 [2024-11-05 18:56:56.300066] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:29.024 18:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.024 18:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.024 18:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 starting I/O failed: -6 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 starting I/O failed: -6 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 starting I/O failed: -6 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 starting I/O failed: -6 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Write completed with error (sct=0, sc=8) 00:07:29.285 Read completed with error (sct=0, sc=8) 00:07:29.285 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read 
completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 [2024-11-05 18:56:58.423715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148960 is same with the state(6) to be set 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with 
error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 starting I/O failed: -6 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 
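The storm of "Read/Write completed with error (sct=0, sc=8)" lines is the behavior under test, not a failure of the run: nvmf_delete_subsystem tears down cnode1 while spdk_nvme_perf still has up to 128 commands queued behind Delay0 (the bdev_delay device created above with 1000000-microsecond, i.e. roughly one-second, latencies), so in-flight commands complete with generic status 0x08 (Command Aborted due to SQ Deletion) and new submissions fail with -6 (-ENXIO). Condensed to its essentials, the sequence the script drives looks like this (paths relative to the SPDK tree; rpc.py stands in for the rpc_cmd wrapper used in the trace):

    # Reproduction sketch of the delete-while-busy race exercised here.
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                   # let queue depth build behind Delay0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    wait "$perf_pid" || true                  # perf reports the aborted I/O and exits nonzero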
00:07:29.286 [2024-11-05 18:56:58.426449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf30000c40 is same with the state(6) to be set 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:29.286 Read completed with error (sct=0, sc=8) 00:07:29.286 Write completed with error (sct=0, sc=8) 00:07:30.229 [2024-11-05 18:56:59.397675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11499a0 is same with the state(6) to be set 00:07:30.229 Write completed with error (sct=0, sc=8) 00:07:30.229 Write completed with error (sct=0, sc=8) 00:07:30.229 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 
00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 [2024-11-05 18:56:59.427625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148b40 is same with the state(6) to be set 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 [2024-11-05 18:56:59.427783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148780 is same with the state(6) to be set 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read 
completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 [2024-11-05 18:56:59.428646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf3000d7c0 is same with the state(6) to be set 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 Write completed with error (sct=0, sc=8) 00:07:30.230 Read completed with error (sct=0, sc=8) 00:07:30.230 [2024-11-05 18:56:59.428778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf3000d020 is same with the state(6) to be set 00:07:30.230 Initializing NVMe Controllers 00:07:30.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.230 Controller IO queue size 128, less than required. 00:07:30.230 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:30.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:30.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:30.230 Initialization complete. Launching workers. 00:07:30.230 ======================================================== 00:07:30.230 Latency(us) 00:07:30.230 Device Information : IOPS MiB/s Average min max 00:07:30.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.84 0.08 888032.03 233.41 1043744.78 00:07:30.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.41 0.07 1075767.79 192.03 2002270.40 00:07:30.230 ======================================================== 00:07:30.230 Total : 325.25 0.16 976006.21 192.03 2002270.40 00:07:30.230 00:07:30.230 [2024-11-05 18:56:59.429301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11499a0 (9): Bad file descriptor 00:07:30.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:30.230 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.230 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:30.230 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 149206 00:07:30.230 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 149206 00:07:30.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (149206) - No such process 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 149206 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 149206 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 149206 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.818 [2024-11-05 18:56:59.961900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=149995 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:30.818 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.818 [2024-11-05 18:57:00.040762] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
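With the subsystem re-created and Delay0 re-attached, a second, 3-second perf run (pid 149995) is started and the script simply waits for it to exit, polling every half second with a bounded loop; the (( delay++ > 20 )) / kill -0 / sleep 0.5 lines below are exactly that. In sketch form:

    # Wait for the backgrounded perf process to exit; the iteration bound and the
    # 0.5 s step match the (( delay++ > 20 )) and sleep 0.5 lines in the trace.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo 'perf did not exit in time' >&2; break; }
        sleep 0.5
    done

Because every command sits behind Delay0's roughly one-second latency, each core's queue of 128 commands completes about once per second, which is why the latency table further down reports ~128 IOPS per core at an average latency of ~1002000 us.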
00:07:31.391 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.391 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:31.391 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.963 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.963 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:31.963 18:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.224 18:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.224 18:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:32.224 18:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.795 18:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:32.795 18:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:32.795 18:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.365 18:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.365 18:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:33.365 18:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.935 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.935 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:33.935 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.935 Initializing NVMe Controllers 00:07:33.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:33.935 Controller IO queue size 128, less than required. 00:07:33.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:33.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:33.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:33.935 Initialization complete. Launching workers. 
00:07:33.935 ======================================================== 00:07:33.935 Latency(us) 00:07:33.935 Device Information : IOPS MiB/s Average min max 00:07:33.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002389.12 1000094.12 1041015.41 00:07:33.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002906.35 1000328.17 1008722.56 00:07:33.936 ======================================================== 00:07:33.936 Total : 256.00 0.12 1002647.73 1000094.12 1041015.41 00:07:33.936 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 149995 00:07:34.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (149995) - No such process 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 149995 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:07:34.196 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:34.197 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:34.457 rmmod nvme_tcp 00:07:34.457 rmmod nvme_fabrics 00:07:34.457 rmmod nvme_keyring 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 149147 ']' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 149147 ']' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 149147' 00:07:34.457 killing process with pid 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 149147 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:34.457 18:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush 
dev cvl_0_1 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:07:37.002 00:07:37.002 real 0m18.500s 00:07:37.002 user 0m30.762s 00:07:37.002 sys 0m6.936s 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.002 ************************************ 00:07:37.002 END TEST nvmf_delete_subsystem 00:07:37.002 ************************************ 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.002 18:57:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.002 ************************************ 00:07:37.002 START TEST nvmf_host_management 00:07:37.002 ************************************ 00:07:37.003 18:57:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:37.003 * Looking for test storage... 
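The firewall teardown in the trace above works because of the tagging applied when the rule was inserted: ipts appended an '-m comment --comment SPDK_NVMF:...' marker to the ACCEPT rule for port 4420, so iptr can remove every SPDK-added rule in one pass by filtering the saved ruleset. Reconstructed from the expansions visible in this log (the exact wrapper bodies in nvmf/common.sh may differ):

    # Insert rules tagged with an SPDK_NVMF comment; strip them all on teardown.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    iptr                                                       # drop every tagged rule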
00:07:37.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.003 --rc genhtml_branch_coverage=1 00:07:37.003 --rc genhtml_function_coverage=1 00:07:37.003 --rc genhtml_legend=1 00:07:37.003 --rc geninfo_all_blocks=1 00:07:37.003 --rc geninfo_unexecuted_blocks=1 00:07:37.003 00:07:37.003 ' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.003 --rc genhtml_branch_coverage=1 00:07:37.003 --rc genhtml_function_coverage=1 00:07:37.003 --rc genhtml_legend=1 00:07:37.003 --rc geninfo_all_blocks=1 00:07:37.003 --rc geninfo_unexecuted_blocks=1 00:07:37.003 00:07:37.003 ' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.003 --rc genhtml_branch_coverage=1 00:07:37.003 --rc genhtml_function_coverage=1 00:07:37.003 --rc genhtml_legend=1 00:07:37.003 --rc geninfo_all_blocks=1 00:07:37.003 --rc geninfo_unexecuted_blocks=1 00:07:37.003 00:07:37.003 ' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:37.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.003 --rc genhtml_branch_coverage=1 00:07:37.003 --rc genhtml_function_coverage=1 00:07:37.003 --rc genhtml_legend=1 00:07:37.003 --rc geninfo_all_blocks=1 00:07:37.003 --rc geninfo_unexecuted_blocks=1 00:07:37.003 00:07:37.003 ' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:37.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:07:37.003 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 
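The `[: : integer expression expected` complaint above is a real bash error rather than a false result: in `'[' '' -eq 1 ']'` the left operand expanded to an empty string, and `-eq` only accepts integers. The exit status is still non-zero, so `build_nvmf_app_args` simply takes the false branch and the run continues, but the pattern is worth guarding against. A sketch, with a hypothetical variable name:

flag=""
# Noisy: prints "[: : integer expression expected" to stderr, then tests false.
[ "$flag" -eq 1 ] && echo enabled
# Quiet: default the expansion so -eq always sees an integer.
[ "${flag:-0}" -eq 1 ] && echo enabled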
00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:45.152 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.152 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.152 18:57:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:45.153 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:45.153 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:07:45.153 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 
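The discovery loop above resolves each whitelisted PCI function to its kernel interface by globbing sysfs, where the bound driver (here `ice`) registers net devices under `/sys/bus/pci/devices/$pci/net/`. A standalone sketch of the same lookup, using the two E810 ports found in this run:

# Map PCI functions to their net devices the way nvmf/common.sh does.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue          # glob has no match if no netdev is bound
        echo "Found net devices under $pci: ${path##*/}"
    done
done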
00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:45.153 10.0.0.1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:45.153 10.0.0.2 00:07:45.153 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 
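This block is the heart of the single-host topology: the NIC's second port is moved into a private network namespace (`nvmf_ns_spdk`) so one machine can play both NVMe-oF initiator and target, and `set_ip` derives the addresses from a plain integer pool, with `val_to_ip` rendering 167772161 (0x0A000001) as 10.0.0.1 and the incremented value as 10.0.0.2. A condensed sketch of the plumbing traced above (the real setup.sh also records each address in `ifalias` and a `dev_map` for later lookups):

# Build the initiator/target pair: cvl_0_0 stays in the root namespace,
# cvl_0_1 moves into nvmf_ns_spdk and becomes the target-side port.
ns=nvmf_ns_spdk

val_to_ip() {
    # Render a 32-bit integer as dotted-quad, one octet per byte.
    local val=$1
    printf '%u.%u.%u.%u\n' $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
                           $(( (val >> 8) & 255 )) $(( val & 255 ))
}

ip netns add "$ns"
ip netns exec "$ns" ip link set lo up
ip link set cvl_0_1 netns "$ns"                                        # add_to_ns
ip addr add "$(val_to_ip 167772161)/24" dev cvl_0_0                    # 10.0.0.1
ip netns exec "$ns" ip addr add "$(val_to_ip 167772162)/24" dev cvl_0_1  # 10.0.0.2
ip link set cvl_0_0 up
ip netns exec "$ns" ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT           # via ipts, tagged SPDK_NVMF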
00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:45.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.637 ms 00:07:45.154 00:07:45.154 --- 10.0.0.1 ping statistics --- 00:07:45.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.154 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:45.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:45.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:07:45.154 00:07:45.154 --- 10.0.0.2 ping statistics --- 00:07:45.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.154 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:45.154 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:45.155 18:57:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=155539 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 155539 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 155539 ']' 00:07:45.155 18:57:13 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.155 18:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.155 [2024-11-05 18:57:13.787304] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:45.155 [2024-11-05 18:57:13.787371] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.155 [2024-11-05 18:57:13.887032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.155 [2024-11-05 18:57:13.938525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.155 [2024-11-05 18:57:13.938583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.155 [2024-11-05 18:57:13.938591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.155 [2024-11-05 18:57:13.938598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.155 [2024-11-05 18:57:13.938604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
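nvmf_tgt is started inside the namespace with `-m 0x1E`, and that mask explains the reactor notices that follow: 0x1E is binary 11110, so bits 1 through 4 are set and SPDK pins one reactor to each of cores 1-4, matching DPDK's "Total cores available: 4". Core 0 is left free for the initiator side; the trace later starts bdevperf there with a 0x1 mask. A quick way to decode any core mask:

# Print the cores selected by an SPDK/DPDK core mask.
mask=0x1E
for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# -> cores 1, 2, 3, 4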
00:07:45.155 [2024-11-05 18:57:13.940670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.155 [2024-11-05 18:57:13.940798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.155 [2024-11-05 18:57:13.941029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.155 [2024-11-05 18:57:13.941030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 [2024-11-05 18:57:14.652998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.417 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.417 Malloc0 00:07:45.417 [2024-11-05 18:57:14.731007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=155877 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 155877 /var/tmp/bdevperf.sock 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 155877 ']' 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:45.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:45.679 { 00:07:45.679 "params": { 00:07:45.679 "name": "Nvme$subsystem", 00:07:45.679 "trtype": "$TEST_TRANSPORT", 00:07:45.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.679 "adrfam": "ipv4", 00:07:45.679 "trsvcid": "$NVMF_PORT", 00:07:45.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.679 "hdgst": ${hdgst:-false}, 00:07:45.679 "ddgst": ${ddgst:-false} 00:07:45.679 }, 00:07:45.679 "method": "bdev_nvme_attach_controller" 00:07:45.679 } 00:07:45.679 EOF 00:07:45.679 )") 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:45.679 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:45.679 "params": { 00:07:45.679 "name": "Nvme0", 00:07:45.679 "trtype": "tcp", 00:07:45.679 "traddr": "10.0.0.2", 00:07:45.679 "adrfam": "ipv4", 00:07:45.679 "trsvcid": "4420", 00:07:45.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:45.679 "hdgst": false, 00:07:45.679 "ddgst": false 00:07:45.679 }, 00:07:45.679 "method": "bdev_nvme_attach_controller" 00:07:45.679 }' 00:07:45.679 [2024-11-05 18:57:14.835309] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
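(The --json /dev/fd/63 argument in the bdevperf invocation traced above is bash process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller config shown, and bdevperf reads it as its --json file over an anonymous pipe. Reassembled by hand it is roughly the following sketch; gen_nvmf_target_json is the nvmf/common.sh helper traced above, not a standalone tool:
    # equivalent to host_management.sh@72: generate the attach-controller JSON and
    # hand it to bdevperf on an anonymous pipe (fd 63 in the trace)
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
)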
00:07:45.679 [2024-11-05 18:57:14.835363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155877 ] 00:07:45.679 [2024-11-05 18:57:14.905894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.679 [2024-11-05 18:57:14.942523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.940 Running I/O for 10 seconds... 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:46.515 18:57:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.515 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.515 [2024-11-05 18:57:15.722140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20db130 is same with the state(6) to be set
00:07:46.516 [... the identical tcp.c:1773 message for tqpair=0x20db130 repeated through 18:57:15.722629, elided ...]
00:07:46.516 [2024-11-05 18:57:15.723059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.516 [2024-11-05 18:57:15.723097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:46.517 [... the same READ / ABORTED - SQ DELETION pair repeated for the remaining 63 queued READs (cid 1-63, lba 123008-130944, timestamps 18:57:15.723115-724191), elided ...]
00:07:46.518 [2024-11-05 18:57:15.724201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b1a0 is same with the state(6) to be set 00:07:46.518 [2024-11-05 18:57:15.725476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.518 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:46.518 00:07:46.518 Latency(us) [2024-11-05T17:57:15.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.518 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:46.518 Job: Nvme0n1 ended in about 0.60 seconds with error 00:07:46.518 Verification LBA range: start 0x0 length 0x400 00:07:46.518 Nvme0n1 : 0.60 1612.21 100.76 107.48 0.00 36330.60 4969.81 32986.45 00:07:46.518 [2024-11-05T17:57:15.841Z] =================================================================================================================== 00:07:46.518 [2024-11-05T17:57:15.841Z] Total : 1612.21 100.76 107.48 0.00 36330.60 4969.81 32986.45 00:07:46.518 [2024-11-05 18:57:15.727486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.518 [2024-11-05 18:57:15.727511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1902000 (9): Bad file descriptor 00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 [2024-11-05 18:57:15.733982] ctrlr.c: 
823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:46.518 [2024-11-05 18:57:15.734061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:46.518 [2024-11-05 18:57:15.734083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.518 [2024-11-05 18:57:15.734097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:46.518 [2024-11-05 18:57:15.734104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:46.518 [2024-11-05 18:57:15.734112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:46.518 [2024-11-05 18:57:15.734119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1902000 00:07:46.518 [2024-11-05 18:57:15.734138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1902000 (9): Bad file descriptor 00:07:46.518 [2024-11-05 18:57:15.734150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:46.518 [2024-11-05 18:57:15.734157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:46.518 [2024-11-05 18:57:15.734166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:46.518 [2024-11-05 18:57:15.734174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
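(The failure cascade above is the point of the test: host_management.sh@84 pulls host0's authorization while bdevperf still holds the connection, so every queued READ is aborted via SQ DELETION and the reconnect is refused with 'does not allow host' / sct 1 sc 132 until @85 restores it. Against a standalone target the same toggle looks roughly like this; a sketch, noting that rpc_cmd in this harness is a thin wrapper around scripts/rpc.py:
    # deauthorize the host: in-flight I/O is aborted and CONNECT starts failing
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-authorize it: the follow-up bdevperf run later in this log completes cleanly
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
)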
00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.518 18:57:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 155877 00:07:47.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (155877) - No such process 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:47.461 { 00:07:47.461 "params": { 00:07:47.461 "name": "Nvme$subsystem", 00:07:47.461 "trtype": "$TEST_TRANSPORT", 00:07:47.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.461 "adrfam": "ipv4", 00:07:47.461 "trsvcid": "$NVMF_PORT", 00:07:47.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.461 "hdgst": ${hdgst:-false}, 00:07:47.461 "ddgst": ${ddgst:-false} 00:07:47.461 }, 00:07:47.461 "method": "bdev_nvme_attach_controller" 00:07:47.461 } 00:07:47.461 EOF 00:07:47.461 )") 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:47.461 18:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:47.461 "params": { 00:07:47.461 "name": "Nvme0", 00:07:47.461 "trtype": "tcp", 00:07:47.461 "traddr": "10.0.0.2", 00:07:47.461 "adrfam": "ipv4", 00:07:47.461 "trsvcid": "4420", 00:07:47.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.461 "hdgst": false, 00:07:47.461 "ddgst": false 00:07:47.461 }, 00:07:47.461 "method": "bdev_nvme_attach_controller" 00:07:47.461 }' 00:07:47.722 [2024-11-05 18:57:16.806502] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:47.722 [2024-11-05 18:57:16.806557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156231 ] 00:07:47.722 [2024-11-05 18:57:16.876939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.722 [2024-11-05 18:57:16.912311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.983 Running I/O for 1 seconds... 00:07:48.926 1536.00 IOPS, 96.00 MiB/s 00:07:48.926 Latency(us) 00:07:48.926 [2024-11-05T17:57:18.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.926 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.926 Verification LBA range: start 0x0 length 0x400 00:07:48.926 Nvme0n1 : 1.01 1576.63 98.54 0.00 0.00 39890.82 7099.73 34734.08 00:07:48.926 [2024-11-05T17:57:18.249Z] =================================================================================================================== 00:07:48.926 [2024-11-05T17:57:18.249Z] Total : 1576.63 98.54 0.00 0.00 39890.82 7099.73 34734.08 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:48.926 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:49.186 rmmod nvme_tcp 00:07:49.186 rmmod nvme_fabrics 00:07:49.186 rmmod nvme_keyring 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 155539 ']' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 155539 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 155539 ']' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 155539 00:07:49.186 18:57:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 155539 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 155539' 00:07:49.186 killing process with pid 155539 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 155539 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 155539 00:07:49.186 [2024-11-05 18:57:18.470853] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:49.186 18:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:07:51.817 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:51.818 00:07:51.818 real 0m14.647s 00:07:51.818 user 0m22.835s 00:07:51.818 sys 0m6.769s 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.818 ************************************ 00:07:51.818 END TEST nvmf_host_management 00:07:51.818 ************************************ 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.818 ************************************ 00:07:51.818 START TEST nvmf_lvol 00:07:51.818 ************************************ 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.818 * Looking for test storage... 
00:07:51.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.818 --rc genhtml_branch_coverage=1 00:07:51.818 --rc genhtml_function_coverage=1 00:07:51.818 --rc genhtml_legend=1 00:07:51.818 --rc geninfo_all_blocks=1 00:07:51.818 --rc geninfo_unexecuted_blocks=1 00:07:51.818 00:07:51.818 ' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.818 --rc genhtml_branch_coverage=1 00:07:51.818 --rc genhtml_function_coverage=1 00:07:51.818 --rc genhtml_legend=1 00:07:51.818 --rc geninfo_all_blocks=1 00:07:51.818 --rc geninfo_unexecuted_blocks=1 00:07:51.818 00:07:51.818 ' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.818 --rc genhtml_branch_coverage=1 00:07:51.818 --rc genhtml_function_coverage=1 00:07:51.818 --rc genhtml_legend=1 00:07:51.818 --rc geninfo_all_blocks=1 00:07:51.818 --rc geninfo_unexecuted_blocks=1 00:07:51.818 00:07:51.818 ' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.818 --rc genhtml_branch_coverage=1 00:07:51.818 --rc genhtml_function_coverage=1 00:07:51.818 --rc genhtml_legend=1 00:07:51.818 --rc geninfo_all_blocks=1 00:07:51.818 --rc geninfo_unexecuted_blocks=1 00:07:51.818 00:07:51.818 ' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
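
The cmp_versions walk traced above is how the suite picks its lcov flags: both version strings are split on '.' and '-' into ver1/ver2 and compared field by field, numerically, so lcov 1.15 sorts below 2 and the pre-2.0 option set gets exported. A condensed sketch of the same comparison (not the verbatim scripts/common.sh source; missing fields are treated as 0):

  # lt A B: succeed when dotted version A is strictly older than B
  lt() {
    local IFS=.-
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
    done
    return 1                                            # equal
  }
  lt 1.15 2 && echo 'pre-2.0 lcov'   # the branch taken in this run
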
00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:51.818 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:51.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:51.819 18:57:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:07:51.819 18:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:59.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:59.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:59.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.967 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:59.968 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:59.968 18:57:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:59.968 18:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:59.968 10.0.0.1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:59.968 10.0.0.2 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:59.968 18:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:59.968 18:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:59.968 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:59.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.609 ms 00:07:59.969 00:07:59.969 --- 10.0.0.1 ping statistics --- 00:07:59.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.969 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:59.969 18:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:59.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:07:59.969 00:07:59.969 --- 10.0.0.2 ping statistics --- 00:07:59.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.969 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.969 18:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # 
get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=160942 00:07:59.969 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 160942 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 160942 ']' 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
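
From here the trace is the target coming up and the suite driving it over JSON-RPC. Stitched together, the namespace wiring above plus the RPC sequence that follows reduce to the recipe below; this is a hedged sketch rather than the verbatim setup.sh/nvmf_lvol.sh logic, with this run's device names (cvl_0_0/cvl_0_1), addresses, and core masks baked in:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Wire the NIC pair: the initiator port stays in the root namespace, its
  # peer moves into nvmf_ns_spdk to act as the target (as setup.sh did above):
  ip netns add nvmf_ns_spdk
  ip link set cvl_0_1 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_0 && ip link set cvl_0_0 up
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'

  # Start the target inside the namespace; a stand-in for waitforlisten is to
  # poll until the app answers on its RPC socket:
  ip netns exec nvmf_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done

  # The lvol suite proper is plain JSON-RPC (names and sizes as in this run):
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # -> Malloc0
  $rpc bdev_malloc_create 64 512                    # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol on the store
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Exercise snapshot/clone semantics while spdk_nvme_perf writes to the lvol:
  "$SPDK_DIR"/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  sleep 1                                           # let perf connect first
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait

In the run below, the two UUID-bearing names come back as lvs 0787275f-263d-47f7-905b-7af4e6f99a65 and lvol d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12, which is what the later bdev_lvol_snapshot and delete calls operate on.
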
00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.970 18:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.970 [2024-11-05 18:57:28.486620] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:59.970 [2024-11-05 18:57:28.486706] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.970 [2024-11-05 18:57:28.569790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.970 [2024-11-05 18:57:28.610741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.970 [2024-11-05 18:57:28.610787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.970 [2024-11-05 18:57:28.610795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.970 [2024-11-05 18:57:28.610802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.970 [2024-11-05 18:57:28.610808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.970 [2024-11-05 18:57:28.612231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.970 [2024-11-05 18:57:28.612347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.970 [2024-11-05 18:57:28.612350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.970 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.970 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:59.970 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:59.970 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:59.970 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:00.230 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.230 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:00.230 [2024-11-05 18:57:29.479027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.230 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:00.502 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:00.502 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:00.762 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:00.762 18:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:01.023 18:57:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:01.023 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0787275f-263d-47f7-905b-7af4e6f99a65 00:08:01.023 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0787275f-263d-47f7-905b-7af4e6f99a65 lvol 20 00:08:01.284 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12 00:08:01.284 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.545 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12 00:08:01.545 18:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:01.805 [2024-11-05 18:57:31.003833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.805 18:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.066 18:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:02.066 18:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=161473 00:08:02.066 18:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:03.009 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12 MY_SNAPSHOT 00:08:03.270 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5aa895d4-7355-481b-8365-19d17f17186c 00:08:03.270 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12 30 00:08:03.530 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5aa895d4-7355-481b-8365-19d17f17186c MY_CLONE 00:08:03.530 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1cd116cb-0dac-4867-a7e5-0f97d46ff542 00:08:03.530 18:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1cd116cb-0dac-4867-a7e5-0f97d46ff542 00:08:03.790 18:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 161473 00:08:13.793 Initializing NVMe Controllers 00:08:13.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:13.793 
Controller IO queue size 128, less than required. 00:08:13.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:13.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:13.793 Initialization complete. Launching workers. 00:08:13.793 ======================================================== 00:08:13.793 Latency(us) 00:08:13.793 Device Information : IOPS MiB/s Average min max 00:08:13.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12029.30 46.99 10645.26 1655.31 40779.78 00:08:13.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17776.20 69.44 7202.37 3173.86 42633.38 00:08:13.793 ======================================================== 00:08:13.793 Total : 29805.49 116.43 8591.89 1655.31 42633.38 00:08:13.793 00:08:13.793 18:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:13.793 18:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4b7a212-8fe5-4530-bfe7-cebc4a6f1a12 00:08:13.793 18:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0787275f-263d-47f7-905b-7af4e6f99a65 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:13.793 rmmod nvme_tcp 00:08:13.793 rmmod nvme_fabrics 00:08:13.793 rmmod nvme_keyring 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 160942 ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 160942 ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = 
Linux ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 160942' 00:08:13.793 killing process with pid 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 160942 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:13.793 18:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:08:15.182 00:08:15.182 real 0m23.822s 00:08:15.182 user 1m4.415s 00:08:15.182 sys 0m8.514s 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.182 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.182 ************************************ 00:08:15.182 END TEST nvmf_lvol 00:08:15.182 ************************************ 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.443 ************************************ 00:08:15.443 START TEST nvmf_lvs_grow 00:08:15.443 ************************************ 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.443 * Looking for test storage... 
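The START/END banners and the real/user/sys totals in this log come from the harness's run_test wrapper, which brackets each suite and times it. A minimal sketch of that visible shape, assuming a simplified wrapper rather than the actual autotest_common.sh implementation:

    # Sketch only: reproduces the banner/timing shape seen in this log; the
    # real wrapper also propagates exit codes and manages xtrace state.
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"        # emits the real/user/sys lines
        echo "************ END TEST $name ************"
    }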
00:08:15.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.443 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.444 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.706 --rc genhtml_branch_coverage=1 00:08:15.706 --rc genhtml_function_coverage=1 00:08:15.706 --rc genhtml_legend=1 00:08:15.706 --rc geninfo_all_blocks=1 00:08:15.706 --rc geninfo_unexecuted_blocks=1 00:08:15.706 00:08:15.706 ' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.706 --rc genhtml_branch_coverage=1 00:08:15.706 --rc genhtml_function_coverage=1 00:08:15.706 --rc genhtml_legend=1 00:08:15.706 --rc geninfo_all_blocks=1 00:08:15.706 --rc geninfo_unexecuted_blocks=1 00:08:15.706 00:08:15.706 ' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.706 --rc genhtml_branch_coverage=1 00:08:15.706 --rc genhtml_function_coverage=1 00:08:15.706 --rc genhtml_legend=1 00:08:15.706 --rc geninfo_all_blocks=1 00:08:15.706 --rc geninfo_unexecuted_blocks=1 00:08:15.706 00:08:15.706 ' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.706 --rc genhtml_branch_coverage=1 00:08:15.706 --rc genhtml_function_coverage=1 00:08:15.706 --rc genhtml_legend=1 00:08:15.706 --rc geninfo_all_blocks=1 00:08:15.706 --rc geninfo_unexecuted_blocks=1 00:08:15.706 00:08:15.706 ' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:15.706 18:57:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.706 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:15.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- 
# '[' -n '' ']' 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:08:15.707 18:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@138 -- # local -ga mlx 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.986 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown 
]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:23.987 18:57:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:23.987 10.0.0.1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:23.987 10.0.0.2 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:23.987 18:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:23.987 18:57:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:23.987 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:23.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.656 ms 00:08:23.987 00:08:23.987 --- 10.0.0.1 ping statistics --- 00:08:23.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.988 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:23.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:08:23.988 00:08:23.988 --- 10.0.0.2 ping statistics --- 00:08:23.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.988 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:23.988 18:57:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:23.988 18:57:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=168033 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 168033 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 168033 ']' 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
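nvmfappstart above reduces to a start-then-poll pattern: nvmf_tgt is launched inside the nvmf_ns_spdk namespace, and waitforlisten (which prints the "Waiting for process..." line) polls the RPC socket until the target answers. A condensed sketch, with an illustrative retry budget:

    # Launch the target in the test namespace, then wait for its RPC socket.
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for _ in $(seq 1 100); do      # retry budget is illustrative
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done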
00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:23.988 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.988 [2024-11-05 18:57:52.335036] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:23.988 [2024-11-05 18:57:52.335089] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.988 [2024-11-05 18:57:52.411991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.988 [2024-11-05 18:57:52.446109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.988 [2024-11-05 18:57:52.446142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.988 [2024-11-05 18:57:52.446150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.988 [2024-11-05 18:57:52.446158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.988 [2024-11-05 18:57:52.446163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.988 [2024-11-05 18:57:52.446733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.988 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.248 [2024-11-05 18:57:53.315017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.248 ************************************ 00:08:24.248 START TEST lvs_grow_clean 00:08:24.248 ************************************ 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 
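The lvs_grow body that follows builds its volume store on a file-backed AIO bdev rather than on real NVMe media: a sparse 200M file is created, wrapped in an AIO bdev with a 4096-byte block size, and an lvstore with 4 MiB clusters is laid on top. Condensed, with an illustrative file path:

    # Back an lvstore with a sparse file (path is illustrative).
    truncate -s 200M /tmp/aio_bdev_file
    rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs
    # 200M at 4 MiB per cluster is 50 clusters, minus metadata, which matches
    # the 49 total_data_clusters reported by bdev_lvol_get_lvstores below.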
00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.248 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.507 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:24.507 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:24.507 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=233eca6b-4987-4322-8a86-3c53242ced83 00:08:24.507 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:24.507 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:24.767 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:24.767 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:24.767 18:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 233eca6b-4987-4322-8a86-3c53242ced83 lvol 150 00:08:25.027 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb02fee0-4835-4f76-b240-e7f2c05ca5d6 00:08:25.027 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.027 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.027 [2024-11-05 18:57:54.256954] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.027 [2024-11-05 18:57:54.257008] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.027 true 00:08:25.027 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:25.027 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:25.286 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:25.286 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:25.546 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb02fee0-4835-4f76-b240-e7f2c05ca5d6 00:08:25.546 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:25.806 [2024-11-05 18:57:54.927041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.806 18:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.806 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=168563 00:08:25.806 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 168563 /var/tmp/bdevperf.sock 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 168563 ']' 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
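Stripped of the xtrace noise, the export-and-attach sequence in this part of the trace is a short RPC flow. A condensed sketch using the NQN, address, and socket paths from this run (the lvol UUID is the one bdev_lvol_create returned above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvol=cb02fee0-4835-4f76-b240-e7f2c05ca5d6   # lvol UUID from bdev_lvol_create above
    # Target side: subsystem (-a: allow any host, -s: serial number SPDK0),
    # namespace, then data and discovery listeners on 10.0.0.2:4420.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: once bdevperf is listening on its own RPC socket, attach the
    # exported namespace as bdev Nvme0n1 (the command bdevperf runs next in the trace).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0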
00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.807 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.067 [2024-11-05 18:57:55.146258] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:26.067 [2024-11-05 18:57:55.146312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168563 ] 00:08:26.067 [2024-11-05 18:57:55.232895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.067 [2024-11-05 18:57:55.269104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.637 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.637 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:26.637 18:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:26.897 Nvme0n1 00:08:27.157 18:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.157 [ 00:08:27.157 { 00:08:27.157 "name": "Nvme0n1", 00:08:27.157 "aliases": [ 00:08:27.157 "cb02fee0-4835-4f76-b240-e7f2c05ca5d6" 00:08:27.157 ], 00:08:27.157 "product_name": "NVMe disk", 00:08:27.157 "block_size": 4096, 00:08:27.157 "num_blocks": 38912, 00:08:27.157 "uuid": "cb02fee0-4835-4f76-b240-e7f2c05ca5d6", 00:08:27.157 "numa_id": 0, 00:08:27.157 "assigned_rate_limits": { 00:08:27.157 "rw_ios_per_sec": 0, 00:08:27.157 "rw_mbytes_per_sec": 0, 00:08:27.157 "r_mbytes_per_sec": 0, 00:08:27.157 "w_mbytes_per_sec": 0 00:08:27.157 }, 00:08:27.157 "claimed": false, 00:08:27.157 "zoned": false, 00:08:27.157 "supported_io_types": { 00:08:27.157 "read": true, 00:08:27.157 "write": true, 00:08:27.157 "unmap": true, 00:08:27.157 "flush": true, 00:08:27.157 "reset": true, 00:08:27.157 "nvme_admin": true, 00:08:27.157 "nvme_io": true, 00:08:27.157 "nvme_io_md": false, 00:08:27.157 "write_zeroes": true, 00:08:27.157 "zcopy": false, 00:08:27.157 "get_zone_info": false, 00:08:27.157 "zone_management": false, 00:08:27.157 "zone_append": false, 00:08:27.157 "compare": true, 00:08:27.157 "compare_and_write": true, 00:08:27.157 "abort": true, 00:08:27.157 "seek_hole": false, 00:08:27.157 "seek_data": false, 00:08:27.157 "copy": true, 00:08:27.157 "nvme_iov_md": false 00:08:27.157 }, 00:08:27.157 "memory_domains": [ 00:08:27.157 { 00:08:27.157 "dma_device_id": "system", 00:08:27.157 "dma_device_type": 1 00:08:27.157 } 00:08:27.157 ], 00:08:27.157 "driver_specific": { 00:08:27.157 "nvme": [ 00:08:27.157 { 00:08:27.157 "trid": { 00:08:27.157 "trtype": "TCP", 00:08:27.157 "adrfam": "IPv4", 00:08:27.157 "traddr": "10.0.0.2", 00:08:27.157 "trsvcid": "4420", 00:08:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.157 }, 00:08:27.157 "ctrlr_data": { 00:08:27.157 "cntlid": 1, 00:08:27.157 "vendor_id": "0x8086", 00:08:27.157 
"model_number": "SPDK bdev Controller", 00:08:27.157 "serial_number": "SPDK0", 00:08:27.157 "firmware_revision": "25.01", 00:08:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.157 "oacs": { 00:08:27.157 "security": 0, 00:08:27.157 "format": 0, 00:08:27.157 "firmware": 0, 00:08:27.157 "ns_manage": 0 00:08:27.157 }, 00:08:27.157 "multi_ctrlr": true, 00:08:27.157 "ana_reporting": false 00:08:27.157 }, 00:08:27.157 "vs": { 00:08:27.157 "nvme_version": "1.3" 00:08:27.157 }, 00:08:27.158 "ns_data": { 00:08:27.158 "id": 1, 00:08:27.158 "can_share": true 00:08:27.158 } 00:08:27.158 } 00:08:27.158 ], 00:08:27.158 "mp_policy": "active_passive" 00:08:27.158 } 00:08:27.158 } 00:08:27.158 ] 00:08:27.158 18:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=168767 00:08:27.158 18:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.158 18:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.418 Running I/O for 10 seconds... 00:08:28.361 Latency(us) 00:08:28.361 [2024-11-05T17:57:57.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.361 Nvme0n1 : 1.00 17601.00 68.75 0.00 0.00 0.00 0.00 0.00 00:08:28.361 [2024-11-05T17:57:57.684Z] =================================================================================================================== 00:08:28.361 [2024-11-05T17:57:57.684Z] Total : 17601.00 68.75 0.00 0.00 0.00 0.00 0.00 00:08:28.361 00:08:29.301 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:29.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.301 Nvme0n1 : 2.00 17794.00 69.51 0.00 0.00 0.00 0.00 0.00 00:08:29.301 [2024-11-05T17:57:58.624Z] =================================================================================================================== 00:08:29.301 [2024-11-05T17:57:58.624Z] Total : 17794.00 69.51 0.00 0.00 0.00 0.00 0.00 00:08:29.301 00:08:29.301 true 00:08:29.301 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:29.301 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:29.562 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.562 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.562 18:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 168767 00:08:30.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.501 Nvme0n1 : 3.00 17866.33 69.79 0.00 0.00 0.00 0.00 0.00 00:08:30.501 [2024-11-05T17:57:59.824Z] =================================================================================================================== 00:08:30.501 
[2024-11-05T17:57:59.824Z] Total : 17866.33 69.79 0.00 0.00 0.00 0.00 0.00 00:08:30.501 00:08:31.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.442 Nvme0n1 : 4.00 17897.50 69.91 0.00 0.00 0.00 0.00 0.00 00:08:31.442 [2024-11-05T17:58:00.765Z] =================================================================================================================== 00:08:31.442 [2024-11-05T17:58:00.765Z] Total : 17897.50 69.91 0.00 0.00 0.00 0.00 0.00 00:08:31.442 00:08:32.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.381 Nvme0n1 : 5.00 17924.80 70.02 0.00 0.00 0.00 0.00 0.00 00:08:32.381 [2024-11-05T17:58:01.704Z] =================================================================================================================== 00:08:32.381 [2024-11-05T17:58:01.704Z] Total : 17924.80 70.02 0.00 0.00 0.00 0.00 0.00 00:08:32.381 00:08:33.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.321 Nvme0n1 : 6.00 17953.00 70.13 0.00 0.00 0.00 0.00 0.00 00:08:33.321 [2024-11-05T17:58:02.644Z] =================================================================================================================== 00:08:33.321 [2024-11-05T17:58:02.644Z] Total : 17953.00 70.13 0.00 0.00 0.00 0.00 0.00 00:08:33.321 00:08:34.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.263 Nvme0n1 : 7.00 17970.00 70.20 0.00 0.00 0.00 0.00 0.00 00:08:34.263 [2024-11-05T17:58:03.586Z] =================================================================================================================== 00:08:34.263 [2024-11-05T17:58:03.586Z] Total : 17970.00 70.20 0.00 0.00 0.00 0.00 0.00 00:08:34.263 00:08:35.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.203 Nvme0n1 : 8.00 17997.62 70.30 0.00 0.00 0.00 0.00 0.00 00:08:35.203 [2024-11-05T17:58:04.526Z] =================================================================================================================== 00:08:35.203 [2024-11-05T17:58:04.526Z] Total : 17997.62 70.30 0.00 0.00 0.00 0.00 0.00 00:08:35.203 00:08:36.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.585 Nvme0n1 : 9.00 18004.67 70.33 0.00 0.00 0.00 0.00 0.00 00:08:36.585 [2024-11-05T17:58:05.908Z] =================================================================================================================== 00:08:36.585 [2024-11-05T17:58:05.908Z] Total : 18004.67 70.33 0.00 0.00 0.00 0.00 0.00 00:08:36.585 00:08:37.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.527 Nvme0n1 : 10.00 18019.70 70.39 0.00 0.00 0.00 0.00 0.00 00:08:37.527 [2024-11-05T17:58:06.850Z] =================================================================================================================== 00:08:37.527 [2024-11-05T17:58:06.850Z] Total : 18019.70 70.39 0.00 0.00 0.00 0.00 0.00 00:08:37.527 00:08:37.527 00:08:37.527 Latency(us) 00:08:37.527 [2024-11-05T17:58:06.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.527 Nvme0n1 : 10.01 18023.13 70.40 0.00 0.00 7099.26 4259.84 17913.17 00:08:37.527 [2024-11-05T17:58:06.850Z] =================================================================================================================== 00:08:37.527 [2024-11-05T17:58:06.850Z] Total : 18023.13 70.40 0.00 0.00 7099.26 
4259.84 17913.17 00:08:37.527 { 00:08:37.527 "results": [ 00:08:37.527 { 00:08:37.527 "job": "Nvme0n1", 00:08:37.527 "core_mask": "0x2", 00:08:37.527 "workload": "randwrite", 00:08:37.527 "status": "finished", 00:08:37.527 "queue_depth": 128, 00:08:37.527 "io_size": 4096, 00:08:37.527 "runtime": 10.005201, 00:08:37.527 "iops": 18023.126172077904, 00:08:37.527 "mibps": 70.40283660967931, 00:08:37.527 "io_failed": 0, 00:08:37.527 "io_timeout": 0, 00:08:37.527 "avg_latency_us": 7099.264883737696, 00:08:37.527 "min_latency_us": 4259.84, 00:08:37.527 "max_latency_us": 17913.173333333332 00:08:37.527 } 00:08:37.527 ], 00:08:37.527 "core_count": 1 00:08:37.527 } 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 168563 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 168563 ']' 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 168563 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 168563 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 168563' 00:08:37.527 killing process with pid 168563 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 168563 00:08:37.527 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.527 00:08:37.527 Latency(us) 00:08:37.527 [2024-11-05T17:58:06.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.527 [2024-11-05T17:58:06.850Z] =================================================================================================================== 00:08:37.527 [2024-11-05T17:58:06.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 168563 00:08:37.527 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.788 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.048 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:38.048 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.048 18:58:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:38.048 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:38.048 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.308 [2024-11-05 18:58:07.454378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:38.308 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:38.569 request: 00:08:38.569 { 00:08:38.569 "uuid": "233eca6b-4987-4322-8a86-3c53242ced83", 00:08:38.569 "method": "bdev_lvol_get_lvstores", 00:08:38.569 "req_id": 1 00:08:38.569 } 00:08:38.569 Got JSON-RPC error response 00:08:38.569 response: 00:08:38.569 { 00:08:38.569 "code": -19, 00:08:38.569 "message": "No such device" 00:08:38.569 } 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:38.569 aio_bdev 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb02fee0-4835-4f76-b240-e7f2c05ca5d6 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=cb02fee0-4835-4f76-b240-e7f2c05ca5d6 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.569 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.829 18:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb02fee0-4835-4f76-b240-e7f2c05ca5d6 -t 2000 00:08:38.829 [ 00:08:38.829 { 00:08:38.829 "name": "cb02fee0-4835-4f76-b240-e7f2c05ca5d6", 00:08:38.829 "aliases": [ 00:08:38.829 "lvs/lvol" 00:08:38.829 ], 00:08:38.829 "product_name": "Logical Volume", 00:08:38.829 "block_size": 4096, 00:08:38.829 "num_blocks": 38912, 00:08:38.829 "uuid": "cb02fee0-4835-4f76-b240-e7f2c05ca5d6", 00:08:38.829 "assigned_rate_limits": { 00:08:38.829 "rw_ios_per_sec": 0, 00:08:38.829 "rw_mbytes_per_sec": 0, 00:08:38.829 "r_mbytes_per_sec": 0, 00:08:38.829 "w_mbytes_per_sec": 0 00:08:38.829 }, 00:08:38.829 "claimed": false, 00:08:38.829 "zoned": false, 00:08:38.829 "supported_io_types": { 00:08:38.829 "read": true, 00:08:38.829 "write": true, 00:08:38.829 "unmap": true, 00:08:38.829 "flush": false, 00:08:38.829 "reset": true, 00:08:38.829 "nvme_admin": false, 00:08:38.829 "nvme_io": false, 00:08:38.829 "nvme_io_md": false, 00:08:38.829 "write_zeroes": true, 00:08:38.829 "zcopy": false, 00:08:38.829 "get_zone_info": false, 00:08:38.829 "zone_management": false, 00:08:38.829 "zone_append": false, 00:08:38.829 "compare": false, 00:08:38.829 "compare_and_write": false, 00:08:38.829 "abort": false, 00:08:38.829 "seek_hole": true, 00:08:38.829 "seek_data": true, 00:08:38.829 "copy": false, 00:08:38.829 "nvme_iov_md": false 00:08:38.829 }, 00:08:38.829 "driver_specific": { 00:08:38.829 "lvol": { 00:08:38.829 "lvol_store_uuid": "233eca6b-4987-4322-8a86-3c53242ced83", 00:08:38.829 "base_bdev": "aio_bdev", 00:08:38.829 "thin_provision": false, 00:08:38.829 "num_allocated_clusters": 38, 00:08:38.829 "snapshot": false, 00:08:38.829 "clone": false, 00:08:38.829 "esnap_clone": false 00:08:38.829 } 00:08:38.829 } 00:08:38.829 } 00:08:38.829 ] 00:08:39.089 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:39.089 
18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:39.089 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:39.089 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:39.089 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:39.089 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:39.349 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:39.349 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb02fee0-4835-4f76-b240-e7f2c05ca5d6 00:08:39.349 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 233eca6b-4987-4322-8a86-3c53242ced83 00:08:39.609 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.870 00:08:39.870 real 0m15.678s 00:08:39.870 user 0m15.382s 00:08:39.870 sys 0m1.377s 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.870 ************************************ 00:08:39.870 END TEST lvs_grow_clean 00:08:39.870 ************************************ 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.870 ************************************ 00:08:39.870 START TEST lvs_grow_dirty 00:08:39.870 ************************************ 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 
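The dirty variant beginning here reuses the grow mechanics that lvs_grow_clean just verified: with the 4194304-byte cluster size set at lvstore creation, growing the 200M backing file to 400M takes the lvstore from 49 to 99 total data clusters. Condensed into the essential steps, assuming the same aio file path and with $lvs standing for the lvstore UUID:

    aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    truncate -s 400M $aio                # grow the backing file (200M -> 400M)
    $rpc bdev_aio_rescan aio_bdev        # bdev picks up the new size: 51200 -> 102400 4KiB blocks
    $rpc bdev_lvol_grow_lvstore -u $lvs  # lvstore claims the added space
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 -> 99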
00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.870 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.130 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.130 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:40.390 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61a43c8a-642c-4637-b630-7fa5658ed9ac lvol 150 00:08:40.650 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:40.650 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.650 18:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:40.911 [2024-11-05 18:58:10.005673] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:40.911 [2024-11-05 18:58:10.005730] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:40.911 true 00:08:40.911 18:58:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:40.911 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:40.911 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:40.911 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.171 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:41.432 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.432 [2024-11-05 18:58:10.679737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.432 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=171840 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 171840 /var/tmp/bdevperf.sock 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 171840 ']' 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.693 18:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.693 [2024-11-05 18:58:10.894494] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:08:41.693 [2024-11-05 18:58:10.894546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171840 ] 00:08:41.693 [2024-11-05 18:58:10.974763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.693 [2024-11-05 18:58:11.004555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.953 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.953 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:41.953 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.214 Nvme0n1 00:08:42.214 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:42.214 [ 00:08:42.214 { 00:08:42.214 "name": "Nvme0n1", 00:08:42.214 "aliases": [ 00:08:42.214 "af39dbe3-58ed-4a4c-96a8-567b0a2999fd" 00:08:42.214 ], 00:08:42.214 "product_name": "NVMe disk", 00:08:42.214 "block_size": 4096, 00:08:42.214 "num_blocks": 38912, 00:08:42.214 "uuid": "af39dbe3-58ed-4a4c-96a8-567b0a2999fd", 00:08:42.214 "numa_id": 0, 00:08:42.214 "assigned_rate_limits": { 00:08:42.214 "rw_ios_per_sec": 0, 00:08:42.214 "rw_mbytes_per_sec": 0, 00:08:42.214 "r_mbytes_per_sec": 0, 00:08:42.214 "w_mbytes_per_sec": 0 00:08:42.214 }, 00:08:42.214 "claimed": false, 00:08:42.214 "zoned": false, 00:08:42.214 "supported_io_types": { 00:08:42.214 "read": true, 00:08:42.214 "write": true, 00:08:42.214 "unmap": true, 00:08:42.214 "flush": true, 00:08:42.214 "reset": true, 00:08:42.214 "nvme_admin": true, 00:08:42.214 "nvme_io": true, 00:08:42.214 "nvme_io_md": false, 00:08:42.214 "write_zeroes": true, 00:08:42.214 "zcopy": false, 00:08:42.214 "get_zone_info": false, 00:08:42.214 "zone_management": false, 00:08:42.214 "zone_append": false, 00:08:42.214 "compare": true, 00:08:42.214 "compare_and_write": true, 00:08:42.214 "abort": true, 00:08:42.214 "seek_hole": false, 00:08:42.214 "seek_data": false, 00:08:42.214 "copy": true, 00:08:42.214 "nvme_iov_md": false 00:08:42.214 }, 00:08:42.214 "memory_domains": [ 00:08:42.214 { 00:08:42.214 "dma_device_id": "system", 00:08:42.214 "dma_device_type": 1 00:08:42.214 } 00:08:42.214 ], 00:08:42.214 "driver_specific": { 00:08:42.214 "nvme": [ 00:08:42.214 { 00:08:42.214 "trid": { 00:08:42.214 "trtype": "TCP", 00:08:42.214 "adrfam": "IPv4", 00:08:42.214 "traddr": "10.0.0.2", 00:08:42.214 "trsvcid": "4420", 00:08:42.214 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:42.214 }, 00:08:42.214 "ctrlr_data": { 00:08:42.214 "cntlid": 1, 00:08:42.214 "vendor_id": "0x8086", 00:08:42.214 "model_number": "SPDK bdev Controller", 00:08:42.214 "serial_number": "SPDK0", 00:08:42.214 "firmware_revision": "25.01", 00:08:42.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.214 "oacs": { 00:08:42.214 "security": 0, 00:08:42.214 "format": 0, 00:08:42.214 "firmware": 0, 00:08:42.214 "ns_manage": 0 00:08:42.214 }, 00:08:42.214 "multi_ctrlr": true, 00:08:42.214 
"ana_reporting": false 00:08:42.214 }, 00:08:42.214 "vs": { 00:08:42.214 "nvme_version": "1.3" 00:08:42.214 }, 00:08:42.214 "ns_data": { 00:08:42.214 "id": 1, 00:08:42.214 "can_share": true 00:08:42.214 } 00:08:42.214 } 00:08:42.214 ], 00:08:42.214 "mp_policy": "active_passive" 00:08:42.214 } 00:08:42.214 } 00:08:42.214 ] 00:08:42.474 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=171858 00:08:42.474 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.474 18:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.474 Running I/O for 10 seconds... 00:08:43.415 Latency(us) 00:08:43.415 [2024-11-05T17:58:12.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.415 Nvme0n1 : 1.00 17587.00 68.70 0.00 0.00 0.00 0.00 0.00 00:08:43.415 [2024-11-05T17:58:12.738Z] =================================================================================================================== 00:08:43.415 [2024-11-05T17:58:12.738Z] Total : 17587.00 68.70 0.00 0.00 0.00 0.00 0.00 00:08:43.415 00:08:44.356 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:44.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.356 Nvme0n1 : 2.00 17794.50 69.51 0.00 0.00 0.00 0.00 0.00 00:08:44.356 [2024-11-05T17:58:13.679Z] =================================================================================================================== 00:08:44.356 [2024-11-05T17:58:13.680Z] Total : 17794.50 69.51 0.00 0.00 0.00 0.00 0.00 00:08:44.357 00:08:44.617 true 00:08:44.617 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:44.617 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:44.617 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:44.617 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:44.617 18:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 171858 00:08:45.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.558 Nvme0n1 : 3.00 17829.67 69.65 0.00 0.00 0.00 0.00 0.00 00:08:45.558 [2024-11-05T17:58:14.881Z] =================================================================================================================== 00:08:45.559 [2024-11-05T17:58:14.882Z] Total : 17829.67 69.65 0.00 0.00 0.00 0.00 0.00 00:08:45.559 00:08:46.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.498 Nvme0n1 : 4.00 17892.00 69.89 0.00 0.00 0.00 0.00 0.00 00:08:46.498 [2024-11-05T17:58:15.821Z] 
=================================================================================================================== 00:08:46.498 [2024-11-05T17:58:15.821Z] Total : 17892.00 69.89 0.00 0.00 0.00 0.00 0.00 00:08:46.498 00:08:47.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.440 Nvme0n1 : 5.00 17932.60 70.05 0.00 0.00 0.00 0.00 0.00 00:08:47.440 [2024-11-05T17:58:16.763Z] =================================================================================================================== 00:08:47.440 [2024-11-05T17:58:16.763Z] Total : 17932.60 70.05 0.00 0.00 0.00 0.00 0.00 00:08:47.440 00:08:48.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.424 Nvme0n1 : 6.00 17955.67 70.14 0.00 0.00 0.00 0.00 0.00 00:08:48.424 [2024-11-05T17:58:17.747Z] =================================================================================================================== 00:08:48.424 [2024-11-05T17:58:17.747Z] Total : 17955.67 70.14 0.00 0.00 0.00 0.00 0.00 00:08:48.424 00:08:49.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.426 Nvme0n1 : 7.00 17963.43 70.17 0.00 0.00 0.00 0.00 0.00 00:08:49.426 [2024-11-05T17:58:18.749Z] =================================================================================================================== 00:08:49.426 [2024-11-05T17:58:18.749Z] Total : 17963.43 70.17 0.00 0.00 0.00 0.00 0.00 00:08:49.426 00:08:50.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.370 Nvme0n1 : 8.00 17985.12 70.25 0.00 0.00 0.00 0.00 0.00 00:08:50.370 [2024-11-05T17:58:19.693Z] =================================================================================================================== 00:08:50.370 [2024-11-05T17:58:19.693Z] Total : 17985.12 70.25 0.00 0.00 0.00 0.00 0.00 00:08:50.370 00:08:51.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.752 Nvme0n1 : 9.00 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:08:51.752 [2024-11-05T17:58:21.076Z] =================================================================================================================== 00:08:51.753 [2024-11-05T17:58:21.076Z] Total : 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:08:51.753 00:08:52.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.695 Nvme0n1 : 10.00 18009.80 70.35 0.00 0.00 0.00 0.00 0.00 00:08:52.695 [2024-11-05T17:58:22.018Z] =================================================================================================================== 00:08:52.695 [2024-11-05T17:58:22.018Z] Total : 18009.80 70.35 0.00 0.00 0.00 0.00 0.00 00:08:52.695 00:08:52.695 00:08:52.695 Latency(us) 00:08:52.695 [2024-11-05T17:58:22.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.695 Nvme0n1 : 10.01 18010.32 70.35 0.00 0.00 7103.72 4287.15 16056.32 00:08:52.695 [2024-11-05T17:58:22.018Z] =================================================================================================================== 00:08:52.695 [2024-11-05T17:58:22.018Z] Total : 18010.32 70.35 0.00 0.00 7103.72 4287.15 16056.32 00:08:52.695 { 00:08:52.695 "results": [ 00:08:52.695 { 00:08:52.695 "job": "Nvme0n1", 00:08:52.695 "core_mask": "0x2", 00:08:52.695 "workload": "randwrite", 00:08:52.695 "status": "finished", 00:08:52.695 "queue_depth": 128, 00:08:52.695 "io_size": 4096, 00:08:52.695 
"runtime": 10.006818, 00:08:52.695 "iops": 18010.320563439847, 00:08:52.695 "mibps": 70.3528147009369, 00:08:52.695 "io_failed": 0, 00:08:52.695 "io_timeout": 0, 00:08:52.695 "avg_latency_us": 7103.718061840874, 00:08:52.695 "min_latency_us": 4287.1466666666665, 00:08:52.695 "max_latency_us": 16056.32 00:08:52.695 } 00:08:52.695 ], 00:08:52.695 "core_count": 1 00:08:52.695 } 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 171840 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 171840 ']' 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 171840 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 171840 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 171840' 00:08:52.695 killing process with pid 171840 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 171840 00:08:52.695 Received shutdown signal, test time was about 10.000000 seconds 00:08:52.695 00:08:52.695 Latency(us) 00:08:52.695 [2024-11-05T17:58:22.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.695 [2024-11-05T17:58:22.018Z] =================================================================================================================== 00:08:52.695 [2024-11-05T17:58:22.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 171840 00:08:52.695 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.955 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:52.956 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:52.956 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:53.216 18:58:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 168033 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 168033 00:08:53.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 168033 Killed "${NVMF_APP[@]}" "$@" 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=174140 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 174140 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 174140 ']' 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.216 18:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.476 [2024-11-05 18:58:22.545809] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:53.476 [2024-11-05 18:58:22.545889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.477 [2024-11-05 18:58:22.625662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.477 [2024-11-05 18:58:22.661120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.477 [2024-11-05 18:58:22.661152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.477 [2024-11-05 18:58:22.661160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.477 [2024-11-05 18:58:22.661167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:53.477 [2024-11-05 18:58:22.661172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.477 [2024-11-05 18:58:22.661733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.047 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.308 [2024-11-05 18:58:23.520027] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.308 [2024-11-05 18:58:23.520118] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.308 [2024-11-05 18:58:23.520149] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:54.308 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.568 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af39dbe3-58ed-4a4c-96a8-567b0a2999fd -t 2000 00:08:54.568 [ 00:08:54.568 { 00:08:54.568 "name": "af39dbe3-58ed-4a4c-96a8-567b0a2999fd", 00:08:54.568 "aliases": [ 00:08:54.568 "lvs/lvol" 00:08:54.568 ], 00:08:54.568 "product_name": "Logical Volume", 00:08:54.568 "block_size": 4096, 00:08:54.568 "num_blocks": 38912, 00:08:54.568 "uuid": "af39dbe3-58ed-4a4c-96a8-567b0a2999fd", 00:08:54.568 "assigned_rate_limits": { 00:08:54.568 "rw_ios_per_sec": 0, 00:08:54.568 "rw_mbytes_per_sec": 0, 
00:08:54.568 "r_mbytes_per_sec": 0, 00:08:54.568 "w_mbytes_per_sec": 0 00:08:54.568 }, 00:08:54.568 "claimed": false, 00:08:54.568 "zoned": false, 00:08:54.568 "supported_io_types": { 00:08:54.568 "read": true, 00:08:54.568 "write": true, 00:08:54.568 "unmap": true, 00:08:54.568 "flush": false, 00:08:54.568 "reset": true, 00:08:54.568 "nvme_admin": false, 00:08:54.568 "nvme_io": false, 00:08:54.568 "nvme_io_md": false, 00:08:54.568 "write_zeroes": true, 00:08:54.568 "zcopy": false, 00:08:54.568 "get_zone_info": false, 00:08:54.568 "zone_management": false, 00:08:54.568 "zone_append": false, 00:08:54.568 "compare": false, 00:08:54.568 "compare_and_write": false, 00:08:54.568 "abort": false, 00:08:54.568 "seek_hole": true, 00:08:54.568 "seek_data": true, 00:08:54.568 "copy": false, 00:08:54.568 "nvme_iov_md": false 00:08:54.568 }, 00:08:54.568 "driver_specific": { 00:08:54.568 "lvol": { 00:08:54.568 "lvol_store_uuid": "61a43c8a-642c-4637-b630-7fa5658ed9ac", 00:08:54.568 "base_bdev": "aio_bdev", 00:08:54.568 "thin_provision": false, 00:08:54.568 "num_allocated_clusters": 38, 00:08:54.568 "snapshot": false, 00:08:54.568 "clone": false, 00:08:54.568 "esnap_clone": false 00:08:54.568 } 00:08:54.568 } 00:08:54.568 } 00:08:54.568 ] 00:08:54.568 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:54.568 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:54.828 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:54.828 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:54.828 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:54.828 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:55.088 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:55.088 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.088 [2024-11-05 18:58:24.368275] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.089 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:55.349 request: 00:08:55.349 { 00:08:55.349 "uuid": "61a43c8a-642c-4637-b630-7fa5658ed9ac", 00:08:55.349 "method": "bdev_lvol_get_lvstores", 00:08:55.349 "req_id": 1 00:08:55.349 } 00:08:55.349 Got JSON-RPC error response 00:08:55.349 response: 00:08:55.349 { 00:08:55.349 "code": -19, 00:08:55.349 "message": "No such device" 00:08:55.349 } 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.349 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.609 aio_bdev 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:55.609 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:55.609 18:58:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:55.869 18:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af39dbe3-58ed-4a4c-96a8-567b0a2999fd -t 2000 00:08:55.869 [ 00:08:55.869 { 00:08:55.869 "name": "af39dbe3-58ed-4a4c-96a8-567b0a2999fd", 00:08:55.869 "aliases": [ 00:08:55.869 "lvs/lvol" 00:08:55.869 ], 00:08:55.869 "product_name": "Logical Volume", 00:08:55.869 "block_size": 4096, 00:08:55.869 "num_blocks": 38912, 00:08:55.869 "uuid": "af39dbe3-58ed-4a4c-96a8-567b0a2999fd", 00:08:55.869 "assigned_rate_limits": { 00:08:55.869 "rw_ios_per_sec": 0, 00:08:55.869 "rw_mbytes_per_sec": 0, 00:08:55.869 "r_mbytes_per_sec": 0, 00:08:55.869 "w_mbytes_per_sec": 0 00:08:55.869 }, 00:08:55.869 "claimed": false, 00:08:55.869 "zoned": false, 00:08:55.869 "supported_io_types": { 00:08:55.869 "read": true, 00:08:55.869 "write": true, 00:08:55.869 "unmap": true, 00:08:55.869 "flush": false, 00:08:55.869 "reset": true, 00:08:55.869 "nvme_admin": false, 00:08:55.869 "nvme_io": false, 00:08:55.869 "nvme_io_md": false, 00:08:55.869 "write_zeroes": true, 00:08:55.869 "zcopy": false, 00:08:55.869 "get_zone_info": false, 00:08:55.869 "zone_management": false, 00:08:55.869 "zone_append": false, 00:08:55.869 "compare": false, 00:08:55.869 "compare_and_write": false, 00:08:55.869 "abort": false, 00:08:55.869 "seek_hole": true, 00:08:55.869 "seek_data": true, 00:08:55.869 "copy": false, 00:08:55.869 "nvme_iov_md": false 00:08:55.869 }, 00:08:55.869 "driver_specific": { 00:08:55.869 "lvol": { 00:08:55.869 "lvol_store_uuid": "61a43c8a-642c-4637-b630-7fa5658ed9ac", 00:08:55.869 "base_bdev": "aio_bdev", 00:08:55.869 "thin_provision": false, 00:08:55.869 "num_allocated_clusters": 38, 00:08:55.869 "snapshot": false, 00:08:55.869 "clone": false, 00:08:55.869 "esnap_clone": false 00:08:55.869 } 00:08:55.869 } 00:08:55.869 } 00:08:55.869 ] 00:08:55.869 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:55.869 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:55.869 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:56.129 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:56.129 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:56.129 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:56.389 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:56.389 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete af39dbe3-58ed-4a4c-96a8-567b0a2999fd 00:08:56.389 18:58:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61a43c8a-642c-4637-b630-7fa5658ed9ac 00:08:56.649 18:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.910 00:08:56.910 real 0m16.925s 00:08:56.910 user 0m44.241s 00:08:56.910 sys 0m2.972s 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.910 ************************************ 00:08:56.910 END TEST lvs_grow_dirty 00:08:56.910 ************************************ 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:56.910 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:56.911 nvmf_trace.0 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:56.911 rmmod nvme_tcp 00:08:56.911 rmmod nvme_fabrics 00:08:56.911 rmmod nvme_keyring 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:08:56.911 
18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 174140 ']' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 174140 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 174140 ']' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 174140 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:56.911 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 174140 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 174140' 00:08:57.172 killing process with pid 174140 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 174140 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 174140 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:57.172 18:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:08:59.721 00:08:59.721 real 0m43.915s 00:08:59.721 user 1m6.140s 00:08:59.721 sys 0m10.324s 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.721 ************************************ 00:08:59.721 END TEST nvmf_lvs_grow 00:08:59.721 ************************************ 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.721 ************************************ 00:08:59.721 START TEST nvmf_bdev_io_wait 00:08:59.721 ************************************ 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:59.721 * Looking for test storage... 
00:08:59.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:59.721 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.722 --rc genhtml_branch_coverage=1 00:08:59.722 --rc genhtml_function_coverage=1 00:08:59.722 --rc genhtml_legend=1 00:08:59.722 --rc geninfo_all_blocks=1 00:08:59.722 --rc geninfo_unexecuted_blocks=1 00:08:59.722 00:08:59.722 ' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.722 --rc genhtml_branch_coverage=1 00:08:59.722 --rc genhtml_function_coverage=1 00:08:59.722 --rc genhtml_legend=1 00:08:59.722 --rc geninfo_all_blocks=1 00:08:59.722 --rc geninfo_unexecuted_blocks=1 00:08:59.722 00:08:59.722 ' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.722 --rc genhtml_branch_coverage=1 00:08:59.722 --rc genhtml_function_coverage=1 00:08:59.722 --rc genhtml_legend=1 00:08:59.722 --rc geninfo_all_blocks=1 00:08:59.722 --rc geninfo_unexecuted_blocks=1 00:08:59.722 00:08:59.722 ' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.722 --rc genhtml_branch_coverage=1 00:08:59.722 --rc genhtml_function_coverage=1 00:08:59.722 --rc genhtml_legend=1 00:08:59.722 --rc geninfo_all_blocks=1 00:08:59.722 --rc geninfo_unexecuted_blocks=1 00:08:59.722 00:08:59.722 ' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.722 18:58:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:59.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:59.722 18:58:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:59.722 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:08:59.723 18:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:09:07.866 18:58:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.866 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:07.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:07.867 18:58:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:07.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:07.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:07.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 
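
Everything from gather_supported_nvmf_pci_devs down to the two "Found net devices" lines is plain sysfs bookkeeping: each whitelisted PCI ID (0x159b is the Intel E810-family device matched here) is resolved to its kernel net device by globbing the device's net/ directory. A rough equivalent of that mapping step, assuming lspci is available (the suite itself walks a prebuilt pci_bus_cache rather than calling lspci):

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue        # skip NICs with no bound net device
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

The cvl_* names are this pool's pre-renamed ice interfaces; the setup that follows keeps cvl_0_0 on the host as the initiator port and moves cvl_0_1 into the nvmf_ns_spdk namespace as the target side.
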
00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 
key_target=target0 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:07.867 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:07.867 10.0.0.1 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:07.868 10.0.0.2 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:07.868 18:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:07.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.557 ms 00:09:07.868 00:09:07.868 --- 10.0.0.1 ping statistics --- 00:09:07.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.868 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:07.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
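Both ping targets were resolved the same way: get_ip_address maps a logical name (initiator0, target0) to its device through dev_map, then reads back the address that was written into the interface's ifalias when it was assigned. A hypothetical condensed form of the get_net_dev/get_ip_address pair traced above:

# dev_map is the logical-to-physical table filled in during setup.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

get_ip_address() {
    local dev=${dev_map[$1]}             # e.g. initiator0 -> cvl_0_0
    [[ -n $dev ]] || return 1
    cat "/sys/class/net/$dev/ifalias"    # address recorded at assignment time
}

get_ip_address initiator0   # -> 10.0.0.1
# Target-side reads run inside the namespace, as in the trace:
#   ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias   # -> 10.0.0.2
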
00:09:07.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:09:07.868 00:09:07.868 --- 10.0.0.2 ping statistics --- 00:09:07.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.868 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:07.868 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:07.869 18:58:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=179202 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 179202 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 179202 ']' 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:07.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.869 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.869 [2024-11-05 18:58:36.322548] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:07.869 [2024-11-05 18:58:36.322617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.869 [2024-11-05 18:58:36.409721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.869 [2024-11-05 18:58:36.453244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.869 [2024-11-05 18:58:36.453284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.869 [2024-11-05 18:58:36.453292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.869 [2024-11-05 18:58:36.453298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.869 [2024-11-05 18:58:36.453304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.869 [2024-11-05 18:58:36.454834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.869 [2024-11-05 18:58:36.455091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.869 [2024-11-05 18:58:36.455228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.869 [2024-11-05 18:58:36.455232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.869 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 
18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 [2024-11-05 18:58:37.231535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 Malloc0 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.131 [2024-11-05 18:58:37.290665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=179340 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=179342 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:08.131 18:58:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:08.131 { 00:09:08.131 "params": { 00:09:08.131 "name": "Nvme$subsystem", 00:09:08.131 "trtype": "$TEST_TRANSPORT", 00:09:08.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.131 "adrfam": "ipv4", 00:09:08.131 "trsvcid": "$NVMF_PORT", 00:09:08.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.131 "hdgst": ${hdgst:-false}, 00:09:08.131 "ddgst": ${ddgst:-false} 00:09:08.131 }, 00:09:08.131 "method": "bdev_nvme_attach_controller" 00:09:08.131 } 00:09:08.131 EOF 00:09:08.131 )") 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=179344 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:08.131 { 00:09:08.131 "params": { 00:09:08.131 "name": "Nvme$subsystem", 00:09:08.131 "trtype": "$TEST_TRANSPORT", 00:09:08.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.131 "adrfam": "ipv4", 00:09:08.131 "trsvcid": "$NVMF_PORT", 00:09:08.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.131 "hdgst": ${hdgst:-false}, 00:09:08.131 "ddgst": ${ddgst:-false} 00:09:08.131 }, 00:09:08.131 "method": "bdev_nvme_attach_controller" 00:09:08.131 } 00:09:08.131 EOF 00:09:08.131 )") 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=179347 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:08.131 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:08.132 18:58:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:08.132 { 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme$subsystem", 00:09:08.132 "trtype": "$TEST_TRANSPORT", 00:09:08.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "$NVMF_PORT", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.132 "hdgst": ${hdgst:-false}, 00:09:08.132 "ddgst": ${ddgst:-false} 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 } 00:09:08.132 EOF 00:09:08.132 )") 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:08.132 { 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme$subsystem", 00:09:08.132 "trtype": "$TEST_TRANSPORT", 00:09:08.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "$NVMF_PORT", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.132 "hdgst": ${hdgst:-false}, 00:09:08.132 "ddgst": ${ddgst:-false} 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 } 00:09:08.132 EOF 00:09:08.132 )") 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 179340 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme1", 00:09:08.132 "trtype": "tcp", 00:09:08.132 "traddr": "10.0.0.2", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "4420", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.132 "hdgst": false, 00:09:08.132 "ddgst": false 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 }' 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
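Each bdevperf instance gets its controller configuration from gen_nvmf_target_json, whose trace appears above: one heredoc fragment per subsystem is collected into an array, the fragments are joined with IFS=',' and the result is run through 'jq .' before being handed to bdevperf, producing the expanded single-controller configs printed nearby. A condensed sketch of that pattern, with the address and port hard-coded for illustration (the real helper derives them from the test environment):

gen_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # With a single subsystem (the only case in these traces) the joined
    # string is one JSON object; 'jq .' validates and pretty-prints it.
    printf '%s\n' "${config[*]}" | jq .
}

bdevperf consumes the result via process substitution, which is why the traced command lines show --json /dev/fd/63.
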
00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme1", 00:09:08.132 "trtype": "tcp", 00:09:08.132 "traddr": "10.0.0.2", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "4420", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.132 "hdgst": false, 00:09:08.132 "ddgst": false 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 }' 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme1", 00:09:08.132 "trtype": "tcp", 00:09:08.132 "traddr": "10.0.0.2", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "4420", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.132 "hdgst": false, 00:09:08.132 "ddgst": false 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 }' 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:09:08.132 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:08.132 "params": { 00:09:08.132 "name": "Nvme1", 00:09:08.132 "trtype": "tcp", 00:09:08.132 "traddr": "10.0.0.2", 00:09:08.132 "adrfam": "ipv4", 00:09:08.132 "trsvcid": "4420", 00:09:08.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:08.132 "hdgst": false, 00:09:08.132 "ddgst": false 00:09:08.132 }, 00:09:08.132 "method": "bdev_nvme_attach_controller" 00:09:08.132 }' 00:09:08.132 [2024-11-05 18:58:37.347265] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:08.132 [2024-11-05 18:58:37.347319] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:08.132 [2024-11-05 18:58:37.347675] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:08.132 [2024-11-05 18:58:37.347721] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:08.132 [2024-11-05 18:58:37.348359] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:08.132 [2024-11-05 18:58:37.348403] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:08.132 [2024-11-05 18:58:37.349901] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
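The four "Starting SPDK" banners above are the four bdevperf instances coming up, one per workload. Condensed, the launch-and-reap pattern traced in bdev_io_wait.sh looks roughly like this (path shortened; gen_json stands in for gen_nvmf_target_json):

bdevperf=$rootdir/build/examples/bdevperf   # $rootdir: assumed SPDK checkout

$bdevperf -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

# Distinct core masks (-m) keep the jobs off each other's cores; distinct
# -i instance ids keep their DPDK shared-memory files (spdk1..spdk4) apart.
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID
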
00:09:08.132 [2024-11-05 18:58:37.349951] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:08.393 [2024-11-05 18:58:37.504937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.393 [2024-11-05 18:58:37.534270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:08.393 [2024-11-05 18:58:37.563043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.393 [2024-11-05 18:58:37.591333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:08.393 [2024-11-05 18:58:37.622017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.393 [2024-11-05 18:58:37.651634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:08.393 [2024-11-05 18:58:37.674083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.393 [2024-11-05 18:58:37.702216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:08.653 Running I/O for 1 seconds...
00:09:08.653 Running I/O for 1 seconds...
00:09:08.653 Running I/O for 1 seconds...
00:09:08.653 Running I/O for 1 seconds...
00:09:09.592 20721.00 IOPS, 80.94 MiB/s
00:09:09.592 Latency(us)
00:09:09.592 [2024-11-05T17:58:38.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.592 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:09.592 Nvme1n1 : 1.01 20780.44 81.17 0.00 0.00 6143.86 2443.95 13653.33
00:09:09.592 [2024-11-05T17:58:38.915Z] ===================================================================================================================
00:09:09.592 [2024-11-05T17:58:38.915Z] Total : 20780.44 81.17 0.00 0.00 6143.86 2443.95 13653.33
00:09:09.592 186576.00 IOPS, 728.81 MiB/s
00:09:09.592 Latency(us)
00:09:09.592 [2024-11-05T17:58:38.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.592 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:09.592 Nvme1n1 : 1.00 186208.79 727.38 0.00 0.00 683.56 298.67 1966.08
00:09:09.592 [2024-11-05T17:58:38.915Z] ===================================================================================================================
00:09:09.592 [2024-11-05T17:58:38.915Z] Total : 186208.79 727.38 0.00 0.00 683.56 298.67 1966.08
00:09:09.592 11767.00 IOPS, 45.96 MiB/s
00:09:09.592 Latency(us)
00:09:09.592 [2024-11-05T17:58:38.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.592 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:09.592 Nvme1n1 : 1.01 11835.09 46.23 0.00 0.00 10778.31 5188.27 20643.84
00:09:09.592 [2024-11-05T17:58:38.915Z] ===================================================================================================================
00:09:09.592 [2024-11-05T17:58:38.915Z] Total : 11835.09 46.23 0.00 0.00 10778.31 5188.27 20643.84
00:09:09.853 11468.00 IOPS, 44.80 MiB/s
00:09:09.853 Latency(us)
00:09:09.853 [2024-11-05T17:58:39.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.853 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:09.853 Nvme1n1 : 1.01 11527.04 45.03 0.00 0.00 11067.64 4860.59 17257.81
00:09:09.853 [2024-11-05T17:58:39.176Z]
=================================================================================================================== 00:09:09.853 [2024-11-05T17:58:39.176Z] Total : 11527.04 45.03 0.00 0.00 11067.64 4860.59 17257.81 00:09:09.853 18:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 179342 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 179344 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 179347 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:09.853 rmmod nvme_tcp 00:09:09.853 rmmod nvme_fabrics 00:09:09.853 rmmod nvme_keyring 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 179202 ']' 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 179202 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 179202 ']' 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 179202 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.853 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 179202 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 179202' 00:09:10.113 killing process with pid 179202 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 179202 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 179202 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:10.113 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 
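The teardown traced here (nvmftestfini -> nvmfcleanup -> nvmf_fini) unloads the kernel initiator modules, flushes the test addresses off both interfaces, clears dev_map, and finally, in the iptr trace just below, reloads the firewall ruleset minus everything tagged SPDK_NVMF. In outline, assuming the same interface names as above:

# Best-effort module unload; the rmmod lines above show what was removed.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop the 10.0.0.x test addresses from both ends of the pair.
ip addr flush dev cvl_0_0
ip addr flush dev cvl_0_1

# 'iptr': every rule inserted by 'ipts' carried an
# '-m comment --comment SPDK_NVMF:...' tag, so filtering the saved ruleset
# removes exactly those rules on restore.
iptables-save | grep -v SPDK_NVMF | iptables-restore
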
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=()
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore
00:09:12.658
00:09:12.658 real 0m12.813s
00:09:12.658 user 0m18.707s
00:09:12.658 sys 0m7.099s
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:12.658 ************************************
00:09:12.658 END TEST nvmf_bdev_io_wait
00:09:12.658 ************************************
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:12.658 ************************************
00:09:12.658 START TEST nvmf_queue_depth
00:09:12.658 ************************************
00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:09:12.658 * Looking for test storage...
00:09:12.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.658 --rc genhtml_branch_coverage=1 00:09:12.658 --rc genhtml_function_coverage=1 00:09:12.658 --rc genhtml_legend=1 00:09:12.658 --rc geninfo_all_blocks=1 00:09:12.658 --rc geninfo_unexecuted_blocks=1 00:09:12.658 00:09:12.658 ' 00:09:12.658 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.659 --rc genhtml_branch_coverage=1 00:09:12.659 --rc genhtml_function_coverage=1 00:09:12.659 --rc genhtml_legend=1 00:09:12.659 --rc geninfo_all_blocks=1 00:09:12.659 --rc geninfo_unexecuted_blocks=1 00:09:12.659 00:09:12.659 ' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.659 --rc genhtml_branch_coverage=1 00:09:12.659 --rc genhtml_function_coverage=1 00:09:12.659 --rc genhtml_legend=1 00:09:12.659 --rc geninfo_all_blocks=1 00:09:12.659 --rc geninfo_unexecuted_blocks=1 00:09:12.659 00:09:12.659 ' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.659 --rc genhtml_branch_coverage=1 00:09:12.659 --rc genhtml_function_coverage=1 00:09:12.659 --rc genhtml_legend=1 00:09:12.659 --rc geninfo_all_blocks=1 00:09:12.659 --rc geninfo_unexecuted_blocks=1 00:09:12.659 00:09:12.659 ' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:12.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:12.659 18:58:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:09:12.659 18:58:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:09:20.800 18:58:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.800 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.800 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.800 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.800 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local 
key_initiator=initiator0 key_target=target0 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:20.800 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:20.801 10.0.0.1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 
-- # val_to_ip 167772162 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:20.801 10.0.0.2 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
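The setup_interface_pair trace above reduces to a short recipe: turn the 32-bit ip_pool value into dotted-quad addresses, move the target-side port into the nvmf_ns_spdk namespace, address and raise both ends, then open TCP port 4420. A standalone sketch of those steps in the same shell idiom (the shift-and-mask body of val_to_ip is an assumption inferred from its traced printf output; the other commands mirror the eval'd lines above):

# Sketch only -- reconstructs the commands eval'd by nvmf/setup.sh above.
val_to_ip() {
    local val=$1   # octet math assumed from the traced printf '%u.%u.%u.%u'
    printf '%u.%u.%u.%u\n' $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) $(( val & 0xff ))
}
val_to_ip 167772161                        # 0x0a000001 -> 10.0.0.1
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk     # target port lives inside the netns
ip addr add 10.0.0.1/24 dev cvl_0_0        # initiator side stays in the root ns
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT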
00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:20.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.658 ms 00:09:20.801 00:09:20.801 --- 10.0.0.1 ping statistics --- 00:09:20.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.801 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:20.801 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:20.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:20.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:09:20.802 00:09:20.802 --- 10.0.0.2 ping statistics --- 00:09:20.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.802 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 
-- # local dev=initiator1 in_ns= ip 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:20.802 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:20.802 
18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=184058 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 184058 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 184058 ']' 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
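waitforlisten above parks the suite until the freshly launched nvmf_tgt (pid 184058) answers on /var/tmp/spdk.sock. A hedged approximation of that loop; the name wait_for_rpc, the polling body, and the retry budget are assumptions, while rpc_get_methods is a standard SPDK RPC:

# Assumed shape of the wait loop, not the repo's exact waitforlisten.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                             # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}
wait_for_rpc 184058 /var/tmp/spdk.sock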
00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.802 [2024-11-05 18:58:49.127431] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:20.802 [2024-11-05 18:58:49.127499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.802 [2024-11-05 18:58:49.228741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.802 [2024-11-05 18:58:49.279320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.802 [2024-11-05 18:58:49.279365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.802 [2024-11-05 18:58:49.279374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.802 [2024-11-05 18:58:49.279381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.802 [2024-11-05 18:58:49.279388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.802 [2024-11-05 18:58:49.280144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 [2024-11-05 18:58:49.977531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.803 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 Malloc0 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 [2024-11-05 18:58:50.035313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=184286 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 184286 /var/tmp/bdevperf.sock 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 184286 ']' 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:20.803 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.803 [2024-11-05 18:58:50.093023] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
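Stripped of the xtrace prefixes, the rpc_cmd calls above provision the target in five steps: create the TCP transport, back it with a 64 MiB malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.2:4420. Every RPC name and argument below is copied from the trace; only the direct rpc.py invocation is an assumption (the suite wraps it in rpc_cmd):

rpc='scripts/rpc.py'                                  # assumed front end for rpc_cmd
$rpc nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process launched next attaches to that subsystem from the initiator side (bdev_nvme_attach_controller in the trace below) and drives it with the 1024-deep, 4 KiB verify workload requested by -q 1024 -o 4096 -w verify -t 10.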
00:09:20.803 [2024-11-05 18:58:50.093087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184286 ]
00:09:21.064 [2024-11-05 18:58:50.169027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.064 [2024-11-05 18:58:50.211007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.636 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:21.636 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:09:21.636 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:21.636 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:21.636 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:21.897 NVMe0n1
00:09:21.897 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:21.897 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:21.897 Running I/O for 10 seconds...
00:09:24.221 9213.00 IOPS, 35.99 MiB/s
[2024-11-05T17:58:54.114Z] 9513.50 IOPS, 37.16 MiB/s
[2024-11-05T17:58:55.499Z] 10241.33 IOPS, 40.01 MiB/s
[2024-11-05T17:58:56.441Z] 10735.50 IOPS, 41.94 MiB/s
[2024-11-05T17:58:57.383Z] 10921.40 IOPS, 42.66 MiB/s
[2024-11-05T17:58:58.326Z] 11090.33 IOPS, 43.32 MiB/s
[2024-11-05T17:58:59.269Z] 11215.86 IOPS, 43.81 MiB/s
[2024-11-05T17:59:00.211Z] 11263.00 IOPS, 44.00 MiB/s
[2024-11-05T17:59:01.154Z] 11369.00 IOPS, 44.41 MiB/s
[2024-11-05T17:59:01.416Z] 11441.60 IOPS, 44.69 MiB/s
00:09:32.093 Latency(us)
00:09:32.093 [2024-11-05T17:59:01.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.093 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:32.093 Verification LBA range: start 0x0 length 0x4000
00:09:32.093 NVMe0n1 : 10.06 11460.55 44.77 0.00 0.00 89002.52 20425.39 65099.09
00:09:32.093 [2024-11-05T17:59:01.416Z] ===================================================================================================================
00:09:32.093 [2024-11-05T17:59:01.416Z] Total : 11460.55 44.77 0.00 0.00 89002.52 20425.39 65099.09
00:09:32.093 {
00:09:32.093 "results": [
00:09:32.093 {
00:09:32.093 "job": "NVMe0n1",
00:09:32.093 "core_mask": "0x1",
00:09:32.093 "workload": "verify",
00:09:32.093 "status": "finished",
00:09:32.093 "verify_range": {
00:09:32.093 "start": 0,
00:09:32.093 "length": 16384
00:09:32.093 },
00:09:32.093 "queue_depth": 1024,
00:09:32.093 "io_size": 4096,
00:09:32.093 "runtime": 10.057111,
00:09:32.093 "iops": 11460.54766622343,
00:09:32.093 "mibps": 44.767764321185275,
00:09:32.093 "io_failed": 0,
00:09:32.093 "io_timeout": 0,
00:09:32.093 "avg_latency_us": 89002.52203088668,
00:09:32.093 "min_latency_us": 20425.386666666665,
00:09:32.093 "max_latency_us": 65099.09333333333
00:09:32.093 }
00:09:32.093 ],
00:09:32.093 "core_count": 1
00:09:32.093 }
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 184286
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 184286 ']'
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 184286
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 184286
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 184286'
killing process with pid 184286
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 184286
Received shutdown signal, test time was about 10.000000 seconds
00:09:32.093
00:09:32.093 Latency(us)
00:09:32.093 [2024-11-05T17:59:01.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:32.093 [2024-11-05T17:59:01.416Z] ===================================================================================================================
00:09:32.093 [2024-11-05T17:59:01.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 184286
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20}
00:09:32.093 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:09:32.093 rmmod nvme_tcp
00:09:32.353 rmmod nvme_fabrics
00:09:32.353 rmmod nvme_keyring
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 184058 ']'
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 184058
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 184058 ']'
00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth --
common/autotest_common.sh@956 -- # kill -0 184058 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 184058 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 184058' 00:09:32.353 killing process with pid 184058 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 184058 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 184058 00:09:32.353 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:32.354 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:34.895 18:59:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:09:34.895 00:09:34.895 real 0m22.267s 00:09:34.895 user 0m25.724s 00:09:34.895 sys 0m6.787s 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.895 ************************************ 00:09:34.895 END TEST nvmf_queue_depth 00:09:34.895 ************************************ 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.895 ************************************ 00:09:34.895 START TEST nvmf_nmic 00:09:34.895 ************************************ 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:34.895 * Looking for test storage... 
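The iptr helper traced in the teardown above is the counterpart of the ipts call made during setup: every rule SPDK inserts carries an 'SPDK_NVMF:' comment, so cleanup can drop them all by rewriting the ruleset without the tagged lines instead of tracking individual rule handles. The pattern, condensed from the two traced commands:

# Setup tags the rule with its own text under an SPDK_NVMF: comment ...
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
# ... and teardown sweeps every tagged rule in one pass.
iptables-save | grep -v SPDK_NVMF | iptables-restore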
00:09:34.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.895 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:34.896 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.896 --rc genhtml_branch_coverage=1 00:09:34.896 --rc genhtml_function_coverage=1 00:09:34.896 --rc genhtml_legend=1 00:09:34.896 --rc geninfo_all_blocks=1 00:09:34.896 --rc geninfo_unexecuted_blocks=1 00:09:34.896 00:09:34.896 ' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.896 --rc genhtml_branch_coverage=1 00:09:34.896 --rc genhtml_function_coverage=1 00:09:34.896 --rc genhtml_legend=1 00:09:34.896 --rc geninfo_all_blocks=1 00:09:34.896 --rc geninfo_unexecuted_blocks=1 00:09:34.896 00:09:34.896 ' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.896 --rc genhtml_branch_coverage=1 00:09:34.896 --rc genhtml_function_coverage=1 00:09:34.896 --rc genhtml_legend=1 00:09:34.896 --rc geninfo_all_blocks=1 00:09:34.896 --rc geninfo_unexecuted_blocks=1 00:09:34.896 00:09:34.896 ' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:34.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.896 --rc genhtml_branch_coverage=1 00:09:34.896 --rc genhtml_function_coverage=1 00:09:34.896 --rc genhtml_legend=1 00:09:34.896 --rc geninfo_all_blocks=1 00:09:34.896 --rc geninfo_unexecuted_blocks=1 00:09:34.896 00:09:34.896 ' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
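The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x before settling on coverage flag spellings. The comparison splits versions on '.' and '-', treats missing fields as 0, and lets the first differing field decide. A compact sketch of that logic (the name version_lt is illustrative, not the repo's):

# Dotted-version compare as traced: IFS=.- splitting, zero-padded fields.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* spellings'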
00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:34.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- 
# trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:34.896 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:34.897 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:09:34.897 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:43.036 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:43.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.036 
18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:43.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:43.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:43.036 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:43.037 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:43.037 18:59:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:43.037 10.0.0.1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:43.037 10.0.0.2 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
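Condensed from the trace around this point: nvmftestinit keeps the first E810 port (cvl_0_0, the initiator side) in the host stack and moves its twin (cvl_0_1, the target side) into the nvmf_ns_spdk namespace, so traffic between 10.0.0.1 and 10.0.0.2 has to cross the physical link rather than short-circuit through loopback. Replayed as plain commands in the order this run executes them (the device names and the 10.0.0.0/24 pool are specific to this job):

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk           # target port leaves the host stack

    ip addr add 10.0.0.1/24 dev cvl_0_0              # initiator address (host side)
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias

    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up

The ifalias writes are how the helpers later recover each device's address (get_ip_address simply cats the alias file), and the iptables ACCEPT rule inserted just below opens TCP port 4420 on the initiator interface before the cross-namespace pings verify reachability.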
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:43.037 18:59:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:09:43.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:43.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.621 ms
00:09:43.037 
00:09:43.037 --- 10.0.0.1 ping statistics ---
00:09:43.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.037 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms
00:09:43.037 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:09:43.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:43.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms
00:09:43.038 
00:09:43.038 --- 10.0.0.2 ping statistics ---
00:09:43.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.038 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ ))
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:09:43.038 18:59:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.038 18:59:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=190790 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 190790 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 190790 ']' 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.038 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 [2024-11-05 18:59:11.533535] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
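From here the harness starts the target application inside the namespace (the traced nvmf_tgt command line, nvmfpid=190790) and drives the rest of the test over JSON-RPC. With the rpc_cmd/xtrace wrappers stripped away, the nmic test body that unfolds below reduces to roughly the following sequence; rpc.py is shown explicitly as a stand-in for the harness's rpc_cmd helper, and both talk to the default /var/tmp/spdk.sock:

    # Target bring-up (mirrors the traced command line; startup output trimmed).
    ip netns exec nvmf_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: a bdev already claimed by cnode1 must be rejected by cnode2.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 &&
        echo "unexpected: adding the same namespace twice should fail"

    # test case2: one subsystem, two listeners, two host connections.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The expected failure in case1 is exactly the "bdev Malloc0 already claimed: type exclusive_write" error logged below, surfaced to the caller as a -32602 JSON-RPC response; the test treats that rejection as the pass condition, then runs fio over the resulting /dev/nvme0n1 device and disconnects.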
00:09:43.039 [2024-11-05 18:59:11.533586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.039 [2024-11-05 18:59:11.612672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.039 [2024-11-05 18:59:11.649752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.039 [2024-11-05 18:59:11.649786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.039 [2024-11-05 18:59:11.649794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.039 [2024-11-05 18:59:11.649801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.039 [2024-11-05 18:59:11.649807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.039 [2024-11-05 18:59:11.651318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.039 [2024-11-05 18:59:11.651431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.039 [2024-11-05 18:59:11.651586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.039 [2024-11-05 18:59:11.651587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 [2024-11-05 18:59:11.791379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.039 Malloc0 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 [2024-11-05 18:59:11.860054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:43.039 test case1: single bdev can't be used in multiple subsystems
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 [2024-11-05 18:59:11.895936] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:43.039 [2024-11-05 18:59:11.895956] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:43.039 [2024-11-05 18:59:11.895965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:43.039 request:
00:09:43.039 {
00:09:43.039 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:43.039 "namespace": {
00:09:43.039 "bdev_name": "Malloc0",
00:09:43.039 "no_auto_visible": false
00:09:43.039 },
00:09:43.039 "method": "nvmf_subsystem_add_ns",
00:09:43.039 "req_id": 1
00:09:43.039 }
00:09:43.039 Got JSON-RPC error response
00:09:43.039 response:
00:09:43.039 {
00:09:43.039 "code": -32602,
00:09:43.039 "message": "Invalid parameters"
00:09:43.039 }
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:43.039 Adding namespace failed - expected result.
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:43.039 test case2: host connect to nvmf target in multiple paths
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:43.039 [2024-11-05 18:59:11.908075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.039 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:44.426 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:09:45.808 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
18:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0
18:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
18:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
18:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2
00:09:48.352 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0
18:59:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:48.352 [global]
00:09:48.352 thread=1
00:09:48.352 invalidate=1
00:09:48.352 rw=write
00:09:48.352 time_based=1
00:09:48.352 runtime=1
00:09:48.352 ioengine=libaio
00:09:48.352 direct=1
00:09:48.352 bs=4096
00:09:48.352 iodepth=1
00:09:48.352 norandommap=0
00:09:48.352 numjobs=1
00:09:48.352 
00:09:48.352 verify_dump=1
00:09:48.352 verify_backlog=512
00:09:48.352 verify_state_save=0
00:09:48.352 do_verify=1
00:09:48.352 verify=crc32c-intel
00:09:48.352 [job0]
00:09:48.352 filename=/dev/nvme0n1
00:09:48.352 Could not set queue depth (nvme0n1)
00:09:48.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:48.352 fio-3.35
00:09:48.352 Starting 1 thread
00:09:49.359 
00:09:49.359 job0: (groupid=0, jobs=1): err= 0: pid=192212: Tue Nov 5 18:59:18 2024
00:09:49.359 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:09:49.359 slat (nsec): min=26013, max=60375, avg=26740.79, stdev=2153.60
00:09:49.359 clat (usec): min=626, max=40957, avg=1132.03, stdev=2490.45
00:09:49.359 lat (usec): min=656, max=40983, avg=1158.77, stdev=2490.42
00:09:49.359 clat percentiles (usec):
00:09:49.359 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930],
00:09:49.359 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996],
00:09:49.359 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090],
00:09:49.359 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[41157], 99.95th=[41157],
00:09:49.359 | 99.99th=[41157]
00:09:49.359 write: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec); 0 zone resets
00:09:49.359 slat (nsec): min=9124, max=65599, avg=28980.56, stdev=10384.26
00:09:49.359 clat (usec): min=165, max=906, avg=575.00, stdev=100.61
00:09:49.359 lat (usec): min=176, max=939, avg=603.98, stdev=105.72
00:09:49.359 clat percentiles (usec):
00:09:49.359 | 1.00th=[ 330], 5.00th=[ 400], 10.00th=[ 437], 20.00th=[ 494],
00:09:49.359 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 594],
00:09:49.359 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 734],
00:09:49.359 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 906], 99.95th=[ 906],
00:09:49.359 | 99.99th=[ 906]
00:09:49.359 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:09:49.359 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:49.359 lat (usec) : 250=0.09%, 500=12.04%, 750=42.78%, 1000=28.69%
00:09:49.359 lat (msec) : 2=16.23%, 50=0.17%
00:09:49.359 cpu : usr=2.80%, sys=4.00%, ctx=1171, majf=0, minf=1
00:09:49.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:49.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:49.359 issued rwts: total=512,659,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:49.359 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:49.359 
00:09:49.359 Run status group 0 (all jobs):
00:09:49.359 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:09:49.359 WRITE: bw=2633KiB/s (2697kB/s), 2633KiB/s-2633KiB/s (2697kB/s-2697kB/s), io=2636KiB (2699kB), run=1001-1001msec
00:09:49.359 
00:09:49.359 Disk stats (read/write):
00:09:49.359 nvme0n1: ios=548/512, merge=0/0, ticks=595/223, in_queue=818, util=92.59%
18:59:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:49.670 rmmod nvme_tcp 00:09:49.670 rmmod nvme_fabrics 00:09:49.670 rmmod nvme_keyring 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 190790 ']' 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 190790 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 190790 ']' 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 190790 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 190790 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 190790' 00:09:49.670 killing process with pid 190790 00:09:49.670 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 190790 00:09:49.670 18:59:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 190790 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:49.931 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@542 -- # iptables-restore 00:09:51.843 00:09:51.843 real 0m17.318s 00:09:51.843 user 0m45.945s 00:09:51.843 sys 0m6.394s 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:51.843 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:51.843 ************************************ 00:09:51.843 END TEST nvmf_nmic 00:09:51.843 ************************************ 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.105 ************************************ 00:09:52.105 START TEST nvmf_fio_target 00:09:52.105 ************************************ 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.105 * Looking for test storage... 00:09:52.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.105 --rc genhtml_branch_coverage=1 00:09:52.105 --rc genhtml_function_coverage=1 00:09:52.105 --rc genhtml_legend=1 00:09:52.105 --rc geninfo_all_blocks=1 00:09:52.105 --rc geninfo_unexecuted_blocks=1 00:09:52.105 00:09:52.105 ' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.105 --rc genhtml_branch_coverage=1 00:09:52.105 --rc genhtml_function_coverage=1 00:09:52.105 --rc genhtml_legend=1 00:09:52.105 --rc geninfo_all_blocks=1 00:09:52.105 --rc geninfo_unexecuted_blocks=1 00:09:52.105 00:09:52.105 ' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.105 --rc genhtml_branch_coverage=1 00:09:52.105 --rc genhtml_function_coverage=1 00:09:52.105 --rc genhtml_legend=1 00:09:52.105 --rc geninfo_all_blocks=1 00:09:52.105 --rc geninfo_unexecuted_blocks=1 00:09:52.105 00:09:52.105 ' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.105 --rc genhtml_branch_coverage=1 00:09:52.105 --rc genhtml_function_coverage=1 00:09:52.105 --rc genhtml_legend=1 00:09:52.105 --rc geninfo_all_blocks=1 00:09:52.105 --rc geninfo_unexecuted_blocks=1 00:09:52.105 00:09:52.105 ' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.105 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.367 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.367 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.367 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.367 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:52.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:52.368 18:59:21 
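
[Editor's note] The "[: : integer expression expected" message above is bash complaining about a numeric test against an empty string; the preceding trace shows exactly that: '[' '' -eq 1 ']'. A two-line illustration of the failure and a defensive variant (the variable name here is made up):

```bash
x=""
[ "$x" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
[ "${x:-0}" -eq 1 ]   # defensive form: defaults the empty value to 0 first
```
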
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:09:52.368 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@137 -- # local -ga x722 00:10:00.513 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.514 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:00.514 18:59:28 
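
[Editor's note] gather_supported_nvmf_pci_devs, traced above and below, matches PCI vendor/device IDs (Intel 0x8086, E810 0x159b) and then resolves each matching function to its kernel net interface through sysfs, as in the `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` expansion. A condensed sketch of that scan, assuming only standard sysfs attributes:

```bash
# Sketch: find E810 ports (0x8086:0x159b) and the net interface bound to each.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
```
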
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.514 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.514 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.514 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 
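
[Editor's note] setup_interface_pair, starting above, splits the physical port pair across network namespaces: cvl_0_0 stays in the root namespace as the initiator, while cvl_0_1 is moved into the fresh nvmf_ns_spdk namespace as the target. Collected from the trace around this point, the split reduces to:

```bash
# The namespace split performed in the trace, in condensed form.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk   # target port leaves the root namespace
```
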
00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:00.514 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:00.515 10.0.0.1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:10:00.515 18:59:28 
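
[Editor's note] val_to_ip, traced above, turns the integer IP-pool value into a dotted quad: 167772161 is 0x0A000001, i.e. 10.0.0.1. A byte-shifting sketch of that conversion (the real function's internals may differ; the printf output format matches the trace):

```bash
val_to_ip() {   # sketch: print a 32-bit value as a dotted quad
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```
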
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:00.515 10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:00.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.615 ms 00:10:00.515 00:10:00.515 --- 10.0.0.1 ping statistics --- 00:10:00.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.515 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:00.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:10:00.515 00:10:00.515 --- 10.0.0.2 ping statistics --- 00:10:00.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.515 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:00.515 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 
in_ns= ip 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=196713 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 196713 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 196713 ']' 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
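
[Editor's note] nvmfappstart, above, launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket (/var/tmp/spdk.sock, per the log) answers. A hedged sketch of that sequence; $SPDK_DIR is a placeholder and the rpc_get_methods probe is an assumption about how readiness could be checked, not the helper's actual code:

```bash
# Sketch: start the target in the namespace, then poll its RPC socket.
SPDK_DIR=/path/to/spdk    # placeholder for the checked-out spdk tree
ip netns exec nvmf_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5             # the socket answers once the app is listening
done
echo "nvmf_tgt up, pid $nvmfpid"
```
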
00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.516 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 [2024-11-05 18:59:28.940162] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:10:00.516 [2024-11-05 18:59:28.940228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.516 [2024-11-05 18:59:29.022860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.516 [2024-11-05 18:59:29.064213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.516 [2024-11-05 18:59:29.064250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.516 [2024-11-05 18:59:29.064258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.516 [2024-11-05 18:59:29.064264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.516 [2024-11-05 18:59:29.064270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.516 [2024-11-05 18:59:29.065786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.516 [2024-11-05 18:59:29.065996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.516 [2024-11-05 18:59:29.065997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.516 [2024-11-05 18:59:29.065865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.516 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:00.777 [2024-11-05 18:59:29.945386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.777 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.038 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:01.038 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.299 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:01.299 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.299 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:01.299 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.560 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:01.560 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:01.821 18:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.821 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:01.821 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.083 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:02.083 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.343 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:02.343 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:02.605 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.605 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:02.605 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.866 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:02.866 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.127 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.127 [2024-11-05 18:59:32.422043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.127 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:03.387 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:03.648 18:59:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:05.030 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:07.573 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.573 [global] 00:10:07.573 thread=1 00:10:07.573 invalidate=1 00:10:07.573 rw=write 00:10:07.573 time_based=1 00:10:07.573 runtime=1 00:10:07.573 ioengine=libaio 00:10:07.573 direct=1 00:10:07.573 bs=4096 00:10:07.573 iodepth=1 00:10:07.573 norandommap=0 00:10:07.573 numjobs=1 00:10:07.573 00:10:07.573 verify_dump=1 00:10:07.573 verify_backlog=512 00:10:07.573 verify_state_save=0 00:10:07.573 do_verify=1 00:10:07.573 verify=crc32c-intel 00:10:07.573 [job0] 00:10:07.573 filename=/dev/nvme0n1 00:10:07.573 [job1] 00:10:07.573 filename=/dev/nvme0n2 00:10:07.573 [job2] 00:10:07.573 filename=/dev/nvme0n3 00:10:07.573 [job3] 00:10:07.573 filename=/dev/nvme0n4 00:10:07.573 Could not set queue depth (nvme0n1) 00:10:07.573 Could not set queue depth (nvme0n2) 00:10:07.573 Could not set queue depth (nvme0n3) 00:10:07.573 Could not set queue depth (nvme0n4) 00:10:07.573 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.573 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.573 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.573 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.573 fio-3.35 00:10:07.573 Starting 4 threads 00:10:08.957 00:10:08.957 job0: (groupid=0, jobs=1): err= 0: pid=198629: Tue Nov 5 18:59:38 2024 00:10:08.957 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:08.957 slat (nsec): min=7910, max=61471, 
avg=25361.13, stdev=4206.21 00:10:08.957 clat (usec): min=757, max=1270, avg=1064.85, stdev=77.81 00:10:08.957 lat (usec): min=782, max=1295, avg=1090.21, stdev=77.97 00:10:08.957 clat percentiles (usec): 00:10:08.957 | 1.00th=[ 840], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1004], 00:10:08.957 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:10:08.957 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:10:08.957 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:10:08.957 | 99.99th=[ 1270] 00:10:08.957 write: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec); 0 zone resets 00:10:08.957 slat (nsec): min=9322, max=70907, avg=26940.43, stdev=9868.55 00:10:08.957 clat (usec): min=149, max=826, avg=573.98, stdev=114.73 00:10:08.957 lat (usec): min=182, max=877, avg=600.92, stdev=118.88 00:10:08.957 clat percentiles (usec): 00:10:08.957 | 1.00th=[ 306], 5.00th=[ 359], 10.00th=[ 420], 20.00th=[ 478], 00:10:08.957 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:10:08.957 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 742], 00:10:08.957 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 824], 99.95th=[ 824], 00:10:08.957 | 99.99th=[ 824] 00:10:08.957 bw ( KiB/s): min= 4096, max= 4096, per=43.40%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.957 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.957 lat (usec) : 250=0.24%, 500=15.02%, 750=40.67%, 1000=9.33% 00:10:08.957 lat (msec) : 2=34.74% 00:10:08.957 cpu : usr=2.30%, sys=2.80%, ctx=1232, majf=0, minf=1 00:10:08.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.957 issued rwts: total=512,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.957 job1: (groupid=0, jobs=1): err= 0: pid=198630: Tue Nov 5 18:59:38 2024 00:10:08.957 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:10:08.957 slat (nsec): min=19931, max=26427, avg=25835.78, stdev=1478.60 00:10:08.957 clat (usec): min=40944, max=42065, avg=41709.98, stdev=373.48 00:10:08.957 lat (usec): min=40970, max=42091, avg=41735.81, stdev=373.38 00:10:08.957 clat percentiles (usec): 00:10:08.957 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:08.957 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:08.957 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:08.957 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.957 | 99.99th=[42206] 00:10:08.957 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:08.957 slat (nsec): min=9808, max=67774, avg=32515.83, stdev=8992.81 00:10:08.957 clat (usec): min=154, max=916, avg=507.91, stdev=128.81 00:10:08.957 lat (usec): min=185, max=950, avg=540.43, stdev=131.59 00:10:08.957 clat percentiles (usec): 00:10:08.957 | 1.00th=[ 243], 5.00th=[ 277], 10.00th=[ 347], 20.00th=[ 392], 00:10:08.957 | 30.00th=[ 433], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 537], 00:10:08.957 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 717], 00:10:08.957 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 914], 99.95th=[ 914], 00:10:08.957 | 99.99th=[ 914] 00:10:08.957 bw ( KiB/s): min= 4096, max= 4096, per=43.40%, avg=4096.00, stdev= 0.00, samples=1 
00:10:08.957 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.957 lat (usec) : 250=1.51%, 500=42.08%, 750=50.75%, 1000=2.26% 00:10:08.957 lat (msec) : 50=3.40% 00:10:08.957 cpu : usr=0.68%, sys=1.65%, ctx=531, majf=0, minf=1 00:10:08.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.957 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.957 job2: (groupid=0, jobs=1): err= 0: pid=198631: Tue Nov 5 18:59:38 2024 00:10:08.957 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:10:08.957 slat (nsec): min=26109, max=29011, avg=28243.53, stdev=637.30 00:10:08.957 clat (usec): min=40949, max=42091, avg=41725.47, stdev=406.29 00:10:08.957 lat (usec): min=40977, max=42119, avg=41753.72, stdev=406.08 00:10:08.957 clat percentiles (usec): 00:10:08.958 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:08.958 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:08.958 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:08.958 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.958 | 99.99th=[42206] 00:10:08.958 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:08.958 slat (nsec): min=9621, max=56922, avg=32476.97, stdev=10431.08 00:10:08.958 clat (usec): min=200, max=2083, avg=599.03, stdev=146.48 00:10:08.958 lat (usec): min=237, max=2119, avg=631.51, stdev=149.43 00:10:08.958 clat percentiles (usec): 00:10:08.958 | 1.00th=[ 293], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 478], 00:10:08.958 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:10:08.958 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:10:08.958 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 2089], 99.95th=[ 2089], 00:10:08.958 | 99.99th=[ 2089] 00:10:08.958 bw ( KiB/s): min= 4096, max= 4096, per=43.40%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.958 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.958 lat (usec) : 250=0.38%, 500=22.87%, 750=63.33%, 1000=10.02% 00:10:08.958 lat (msec) : 4=0.19%, 50=3.21% 00:10:08.958 cpu : usr=0.87%, sys=2.12%, ctx=531, majf=0, minf=1 00:10:08.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.958 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.958 job3: (groupid=0, jobs=1): err= 0: pid=198632: Tue Nov 5 18:59:38 2024 00:10:08.958 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:08.958 slat (nsec): min=9781, max=57583, avg=26994.94, stdev=3053.64 00:10:08.958 clat (usec): min=584, max=1219, avg=1018.73, stdev=77.63 00:10:08.958 lat (usec): min=610, max=1245, avg=1045.72, stdev=77.47 00:10:08.958 clat percentiles (usec): 00:10:08.958 | 1.00th=[ 717], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 979], 00:10:08.958 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:10:08.958 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:08.958 | 99.00th=[ 
1172], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:08.958 | 99.99th=[ 1221] 00:10:08.958 write: IOPS=702, BW=2809KiB/s (2877kB/s)(2812KiB/1001msec); 0 zone resets 00:10:08.958 slat (nsec): min=9171, max=63461, avg=30612.43, stdev=9048.01 00:10:08.958 clat (usec): min=233, max=938, avg=617.12, stdev=116.59 00:10:08.958 lat (usec): min=243, max=972, avg=647.73, stdev=120.18 00:10:08.958 clat percentiles (usec): 00:10:08.958 | 1.00th=[ 310], 5.00th=[ 424], 10.00th=[ 461], 20.00th=[ 523], 00:10:08.958 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:08.958 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:10:08.958 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:10:08.958 | 99.99th=[ 938] 00:10:08.958 bw ( KiB/s): min= 4096, max= 4096, per=43.40%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.958 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.958 lat (usec) : 250=0.08%, 500=9.88%, 750=43.29%, 1000=18.02% 00:10:08.958 lat (msec) : 2=28.72% 00:10:08.958 cpu : usr=3.30%, sys=4.00%, ctx=1215, majf=0, minf=1 00:10:08.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.958 issued rwts: total=512,703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.958 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.958 00:10:08.958 Run status group 0 (all jobs): 00:10:08.958 READ: bw=4085KiB/s (4183kB/s), 65.6KiB/s-2046KiB/s (67.1kB/s-2095kB/s), io=4236KiB (4338kB), run=1001-1037msec 00:10:08.958 WRITE: bw=9439KiB/s (9665kB/s), 1975KiB/s-2877KiB/s (2022kB/s-2946kB/s), io=9788KiB (10.0MB), run=1001-1037msec 00:10:08.958 00:10:08.958 Disk stats (read/write): 00:10:08.958 nvme0n1: ios=523/512, merge=0/0, ticks=537/297, in_queue=834, util=86.57% 00:10:08.958 nvme0n2: ios=37/512, merge=0/0, ticks=1527/240, in_queue=1767, util=96.73% 00:10:08.958 nvme0n3: ios=69/512, merge=0/0, ticks=1085/247, in_queue=1332, util=96.93% 00:10:08.958 nvme0n4: ios=466/512, merge=0/0, ticks=429/257, in_queue=686, util=89.40% 00:10:08.958 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:08.958 [global] 00:10:08.958 thread=1 00:10:08.958 invalidate=1 00:10:08.958 rw=randwrite 00:10:08.958 time_based=1 00:10:08.958 runtime=1 00:10:08.958 ioengine=libaio 00:10:08.958 direct=1 00:10:08.958 bs=4096 00:10:08.958 iodepth=1 00:10:08.958 norandommap=0 00:10:08.958 numjobs=1 00:10:08.958 00:10:08.958 verify_dump=1 00:10:08.958 verify_backlog=512 00:10:08.958 verify_state_save=0 00:10:08.958 do_verify=1 00:10:08.958 verify=crc32c-intel 00:10:08.958 [job0] 00:10:08.958 filename=/dev/nvme0n1 00:10:08.958 [job1] 00:10:08.958 filename=/dev/nvme0n2 00:10:08.958 [job2] 00:10:08.958 filename=/dev/nvme0n3 00:10:08.958 [job3] 00:10:08.958 filename=/dev/nvme0n4 00:10:08.958 Could not set queue depth (nvme0n1) 00:10:08.958 Could not set queue depth (nvme0n2) 00:10:08.958 Could not set queue depth (nvme0n3) 00:10:08.958 Could not set queue depth (nvme0n4) 00:10:09.218 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.218 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.218 
job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.218 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.218 fio-3.35 00:10:09.218 Starting 4 threads 00:10:10.602 00:10:10.602 job0: (groupid=0, jobs=1): err= 0: pid=199158: Tue Nov 5 18:59:39 2024 00:10:10.602 read: IOPS=19, BW=77.7KiB/s (79.6kB/s)(80.0KiB/1029msec) 00:10:10.602 slat (nsec): min=25092, max=29623, avg=25754.00, stdev=927.94 00:10:10.602 clat (usec): min=731, max=42495, avg=37781.12, stdev=12619.02 00:10:10.602 lat (usec): min=756, max=42521, avg=37806.87, stdev=12618.38 00:10:10.602 clat percentiles (usec): 00:10:10.602 | 1.00th=[ 734], 5.00th=[ 734], 10.00th=[ 1057], 20.00th=[41157], 00:10:10.602 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:10.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:10.602 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:10.602 | 99.99th=[42730] 00:10:10.602 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:10.602 slat (nsec): min=9405, max=52828, avg=29070.43, stdev=8320.99 00:10:10.602 clat (usec): min=172, max=762, avg=495.09, stdev=113.36 00:10:10.602 lat (usec): min=185, max=784, avg=524.17, stdev=115.86 00:10:10.602 clat percentiles (usec): 00:10:10.602 | 1.00th=[ 249], 5.00th=[ 297], 10.00th=[ 359], 20.00th=[ 392], 00:10:10.602 | 30.00th=[ 429], 40.00th=[ 469], 50.00th=[ 502], 60.00th=[ 523], 00:10:10.602 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 676], 00:10:10.602 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 766], 00:10:10.602 | 99.99th=[ 766] 00:10:10.602 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.602 lat (usec) : 250=1.13%, 500=46.05%, 750=48.87%, 1000=0.38% 00:10:10.602 lat (msec) : 2=0.19%, 50=3.38% 00:10:10.602 cpu : usr=0.97%, sys=1.17%, ctx=533, majf=0, minf=2 00:10:10.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.602 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.602 job1: (groupid=0, jobs=1): err= 0: pid=199159: Tue Nov 5 18:59:39 2024 00:10:10.602 read: IOPS=669, BW=2678KiB/s (2743kB/s)(2756KiB/1029msec) 00:10:10.602 slat (nsec): min=6905, max=62090, avg=23982.82, stdev=7333.40 00:10:10.602 clat (usec): min=165, max=41262, avg=853.94, stdev=2664.56 00:10:10.602 lat (usec): min=173, max=41288, avg=877.93, stdev=2664.80 00:10:10.602 clat percentiles (usec): 00:10:10.602 | 1.00th=[ 314], 5.00th=[ 457], 10.00th=[ 529], 20.00th=[ 562], 00:10:10.602 | 30.00th=[ 611], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 734], 00:10:10.602 | 70.00th=[ 775], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 832], 00:10:10.602 | 99.00th=[ 889], 99.50th=[ 963], 99.90th=[41157], 99.95th=[41157], 00:10:10.602 | 99.99th=[41157] 00:10:10.602 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:10:10.602 slat (nsec): min=9457, max=70250, avg=27315.09, stdev=10067.49 00:10:10.602 clat (usec): min=114, max=3528, avg=373.72, stdev=132.64 00:10:10.602 lat (usec): min=124, max=3560, 
avg=401.04, stdev=135.43 00:10:10.602 clat percentiles (usec): 00:10:10.602 | 1.00th=[ 133], 5.00th=[ 229], 10.00th=[ 258], 20.00th=[ 285], 00:10:10.602 | 30.00th=[ 334], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 392], 00:10:10.602 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 474], 95.00th=[ 506], 00:10:10.602 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 3523], 00:10:10.602 | 99.99th=[ 3523] 00:10:10.602 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=2 00:10:10.602 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:10.602 lat (usec) : 250=4.73%, 500=54.82%, 750=25.10%, 1000=15.12% 00:10:10.602 lat (msec) : 4=0.06%, 50=0.18% 00:10:10.602 cpu : usr=2.53%, sys=4.38%, ctx=1714, majf=0, minf=2 00:10:10.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.602 issued rwts: total=689,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.602 job2: (groupid=0, jobs=1): err= 0: pid=199161: Tue Nov 5 18:59:39 2024 00:10:10.602 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:10:10.602 slat (nsec): min=10299, max=27650, avg=26340.71, stdev=3683.23 00:10:10.602 clat (usec): min=40843, max=41052, avg=40957.73, stdev=46.55 00:10:10.602 lat (usec): min=40854, max=41079, avg=40984.08, stdev=48.68 00:10:10.602 clat percentiles (usec): 00:10:10.602 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:10.602 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.603 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:10.603 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:10.603 | 99.99th=[41157] 00:10:10.603 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:10.603 slat (nsec): min=9733, max=52651, avg=25717.67, stdev=11459.91 00:10:10.603 clat (usec): min=102, max=511, avg=308.73, stdev=63.37 00:10:10.603 lat (usec): min=127, max=545, avg=334.45, stdev=65.40 00:10:10.603 clat percentiles (usec): 00:10:10.603 | 1.00th=[ 143], 5.00th=[ 196], 10.00th=[ 227], 20.00th=[ 265], 00:10:10.603 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 322], 00:10:10.603 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 404], 00:10:10.603 | 99.00th=[ 449], 99.50th=[ 486], 99.90th=[ 510], 99.95th=[ 510], 00:10:10.603 | 99.99th=[ 510] 00:10:10.603 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.603 lat (usec) : 250=14.63%, 500=81.24%, 750=0.19% 00:10:10.603 lat (msec) : 50=3.94% 00:10:10.603 cpu : usr=0.68%, sys=1.26%, ctx=536, majf=0, minf=1 00:10:10.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.603 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.603 job3: (groupid=0, jobs=1): err= 0: pid=199162: Tue Nov 5 18:59:39 2024 00:10:10.603 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:10:10.603 slat (nsec): min=10224, 
max=26546, avg=25382.38, stdev=3477.01 00:10:10.603 clat (usec): min=40900, max=41488, avg=40990.61, stdev=120.61 00:10:10.603 lat (usec): min=40926, max=41498, avg=41016.00, stdev=117.32 00:10:10.603 clat percentiles (usec): 00:10:10.603 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:10.603 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.603 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:10.603 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:10.603 | 99.99th=[41681] 00:10:10.603 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:10.603 slat (nsec): min=9332, max=47391, avg=24001.36, stdev=10734.56 00:10:10.603 clat (usec): min=124, max=492, avg=301.95, stdev=60.33 00:10:10.603 lat (usec): min=142, max=524, avg=325.95, stdev=61.13 00:10:10.603 clat percentiles (usec): 00:10:10.603 | 1.00th=[ 143], 5.00th=[ 194], 10.00th=[ 217], 20.00th=[ 258], 00:10:10.603 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:10:10.603 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 400], 00:10:10.603 | 99.00th=[ 453], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 494], 00:10:10.603 | 99.99th=[ 494] 00:10:10.603 bw ( KiB/s): min= 4104, max= 4104, per=41.52%, avg=4104.00, stdev= 0.00, samples=1 00:10:10.603 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:10:10.603 lat (usec) : 250=17.64%, 500=78.42% 00:10:10.603 lat (msec) : 50=3.94% 00:10:10.603 cpu : usr=0.48%, sys=1.26%, ctx=534, majf=0, minf=1 00:10:10.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.603 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.603 00:10:10.603 Run status group 0 (all jobs): 00:10:10.603 READ: bw=2900KiB/s (2969kB/s), 77.7KiB/s-2678KiB/s (79.6kB/s-2743kB/s), io=3004KiB (3076kB), run=1029-1036msec 00:10:10.603 WRITE: bw=9884KiB/s (10.1MB/s), 1977KiB/s-3981KiB/s (2024kB/s-4076kB/s), io=10.0MiB (10.5MB), run=1029-1036msec 00:10:10.603 00:10:10.603 Disk stats (read/write): 00:10:10.603 nvme0n1: ios=64/512, merge=0/0, ticks=634/235, in_queue=869, util=91.48% 00:10:10.603 nvme0n2: ios=649/1024, merge=0/0, ticks=480/363, in_queue=843, util=90.83% 00:10:10.603 nvme0n3: ios=76/512, merge=0/0, ticks=1598/149, in_queue=1747, util=97.15% 00:10:10.603 nvme0n4: ios=50/512, merge=0/0, ticks=803/150, in_queue=953, util=98.08% 00:10:10.603 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:10.603 [global] 00:10:10.603 thread=1 00:10:10.603 invalidate=1 00:10:10.603 rw=write 00:10:10.603 time_based=1 00:10:10.603 runtime=1 00:10:10.603 ioengine=libaio 00:10:10.603 direct=1 00:10:10.603 bs=4096 00:10:10.603 iodepth=128 00:10:10.603 norandommap=0 00:10:10.603 numjobs=1 00:10:10.603 00:10:10.603 verify_dump=1 00:10:10.603 verify_backlog=512 00:10:10.603 verify_state_save=0 00:10:10.603 do_verify=1 00:10:10.603 verify=crc32c-intel 00:10:10.603 [job0] 00:10:10.603 filename=/dev/nvme0n1 00:10:10.603 [job1] 00:10:10.603 filename=/dev/nvme0n2 00:10:10.603 [job2] 00:10:10.603 filename=/dev/nvme0n3 00:10:10.603 [job3] 00:10:10.603 
filename=/dev/nvme0n4 00:10:10.603 Could not set queue depth (nvme0n1) 00:10:10.603 Could not set queue depth (nvme0n2) 00:10:10.603 Could not set queue depth (nvme0n3) 00:10:10.603 Could not set queue depth (nvme0n4) 00:10:10.863 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.863 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.863 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.863 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.863 fio-3.35 00:10:10.863 Starting 4 threads 00:10:12.247 00:10:12.247 job0: (groupid=0, jobs=1): err= 0: pid=199690: Tue Nov 5 18:59:41 2024 00:10:12.247 read: IOPS=3235, BW=12.6MiB/s (13.3MB/s)(12.7MiB/1008msec) 00:10:12.247 slat (nsec): min=938, max=14364k, avg=121693.14, stdev=752420.73 00:10:12.247 clat (usec): min=2595, max=45274, avg=14472.03, stdev=4897.28 00:10:12.247 lat (usec): min=7651, max=45303, avg=14593.72, stdev=4970.83 00:10:12.247 clat percentiles (usec): 00:10:12.247 | 1.00th=[ 8160], 5.00th=[10290], 10.00th=[11207], 20.00th=[11731], 00:10:12.247 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12911], 60.00th=[13435], 00:10:12.247 | 70.00th=[14091], 80.00th=[15795], 90.00th=[20317], 95.00th=[28181], 00:10:12.247 | 99.00th=[31851], 99.50th=[31851], 99.90th=[35390], 99.95th=[38011], 00:10:12.247 | 99.99th=[45351] 00:10:12.247 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:10:12.247 slat (nsec): min=1670, max=16740k, avg=163958.80, stdev=837307.83 00:10:12.247 clat (usec): min=8730, max=51313, avg=22396.09, stdev=11367.75 00:10:12.247 lat (usec): min=8739, max=51316, avg=22560.05, stdev=11419.57 00:10:12.247 clat percentiles (usec): 00:10:12.247 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10552], 20.00th=[13698], 00:10:12.247 | 30.00th=[16581], 40.00th=[17433], 50.00th=[17433], 60.00th=[18482], 00:10:12.247 | 70.00th=[24249], 80.00th=[32637], 90.00th=[41681], 95.00th=[48497], 00:10:12.247 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:10:12.247 | 99.99th=[51119] 00:10:12.247 bw ( KiB/s): min=12728, max=15944, per=18.08%, avg=14336.00, stdev=2274.06, samples=2 00:10:12.247 iops : min= 3182, max= 3986, avg=3584.00, stdev=568.51, samples=2 00:10:12.247 lat (msec) : 4=0.01%, 10=4.70%, 20=72.48%, 50=21.99%, 100=0.82% 00:10:12.247 cpu : usr=1.99%, sys=4.07%, ctx=456, majf=0, minf=2 00:10:12.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:12.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.247 issued rwts: total=3261,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.247 job1: (groupid=0, jobs=1): err= 0: pid=199691: Tue Nov 5 18:59:41 2024 00:10:12.247 read: IOPS=8508, BW=33.2MiB/s (34.9MB/s)(33.4MiB/1006msec) 00:10:12.247 slat (nsec): min=908, max=6545.8k, avg=60561.88, stdev=415239.46 00:10:12.247 clat (usec): min=927, max=24279, avg=8178.07, stdev=2805.04 00:10:12.247 lat (usec): min=2953, max=24288, avg=8238.63, stdev=2829.39 00:10:12.247 clat percentiles (usec): 00:10:12.247 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6325], 00:10:12.247 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7570], 
60.00th=[ 8029], 00:10:12.247 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[11994], 00:10:12.247 | 99.00th=[20579], 99.50th=[23462], 99.90th=[24249], 99.95th=[24249], 00:10:12.247 | 99.99th=[24249] 00:10:12.247 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec); 0 zone resets 00:10:12.247 slat (nsec): min=1608, max=10595k, avg=49535.53, stdev=380127.67 00:10:12.247 clat (usec): min=1232, max=26009, avg=6611.42, stdev=3296.76 00:10:12.247 lat (usec): min=1241, max=26020, avg=6660.95, stdev=3311.16 00:10:12.247 clat percentiles (usec): 00:10:12.247 | 1.00th=[ 2573], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4490], 00:10:12.247 | 30.00th=[ 5145], 40.00th=[ 5473], 50.00th=[ 5866], 60.00th=[ 6259], 00:10:12.247 | 70.00th=[ 6849], 80.00th=[ 8029], 90.00th=[ 9372], 95.00th=[11600], 00:10:12.247 | 99.00th=[21890], 99.50th=[24249], 99.90th=[25560], 99.95th=[26084], 00:10:12.247 | 99.99th=[26084] 00:10:12.247 bw ( KiB/s): min=32768, max=36864, per=43.91%, avg=34816.00, stdev=2896.31, samples=2 00:10:12.247 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:10:12.247 lat (usec) : 1000=0.01% 00:10:12.247 lat (msec) : 2=0.20%, 4=5.62%, 10=82.41%, 20=10.18%, 50=1.59% 00:10:12.247 cpu : usr=7.66%, sys=9.15%, ctx=370, majf=0, minf=2 00:10:12.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:12.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.248 issued rwts: total=8560,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.248 job2: (groupid=0, jobs=1): err= 0: pid=199692: Tue Nov 5 18:59:41 2024 00:10:12.248 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:12.248 slat (nsec): min=937, max=16118k, avg=134846.12, stdev=942634.89 00:10:12.248 clat (usec): min=3549, max=51963, avg=16677.06, stdev=12245.79 00:10:12.248 lat (usec): min=3558, max=51972, avg=16811.91, stdev=12359.86 00:10:12.248 clat percentiles (usec): 00:10:12.248 | 1.00th=[ 4359], 5.00th=[ 5866], 10.00th=[ 7832], 20.00th=[ 8291], 00:10:12.248 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[11338], 00:10:12.248 | 70.00th=[19530], 80.00th=[27919], 90.00th=[40109], 95.00th=[42730], 00:10:12.248 | 99.00th=[45351], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:10:12.248 | 99.99th=[52167] 00:10:12.248 write: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1004msec); 0 zone resets 00:10:12.248 slat (nsec): min=1645, max=11284k, avg=133865.79, stdev=787585.76 00:10:12.248 clat (usec): min=1678, max=62480, avg=18230.26, stdev=13476.04 00:10:12.248 lat (usec): min=4292, max=62488, avg=18364.12, stdev=13580.09 00:10:12.248 clat percentiles (usec): 00:10:12.248 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 7308], 20.00th=[ 8225], 00:10:12.248 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[17433], 00:10:12.248 | 70.00th=[23462], 80.00th=[28443], 90.00th=[41681], 95.00th=[46400], 00:10:12.248 | 99.00th=[55837], 99.50th=[57934], 99.90th=[61080], 99.95th=[62653], 00:10:12.248 | 99.99th=[62653] 00:10:12.248 bw ( KiB/s): min=13504, max=15216, per=18.11%, avg=14360.00, stdev=1210.57, samples=2 00:10:12.248 iops : min= 3376, max= 3804, avg=3590.00, stdev=302.64, samples=2 00:10:12.248 lat (msec) : 2=0.01%, 4=0.16%, 10=52.28%, 20=15.91%, 50=30.27% 00:10:12.248 lat (msec) : 100=1.36% 00:10:12.248 cpu : usr=3.19%, sys=4.09%, ctx=258, majf=0, minf=1 
00:10:12.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:12.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.248 issued rwts: total=3584,3696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.248 job3: (groupid=0, jobs=1): err= 0: pid=199693: Tue Nov 5 18:59:41 2024 00:10:12.248 read: IOPS=3591, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1013msec) 00:10:12.248 slat (nsec): min=957, max=14782k, avg=124582.61, stdev=888254.69 00:10:12.248 clat (usec): min=5046, max=38896, avg=13647.11, stdev=5198.56 00:10:12.248 lat (usec): min=5054, max=38906, avg=13771.69, stdev=5270.58 00:10:12.248 clat percentiles (usec): 00:10:12.248 | 1.00th=[ 5342], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:10:12.248 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[12387], 00:10:12.248 | 70.00th=[13698], 80.00th=[16909], 90.00th=[20055], 95.00th=[24511], 00:10:12.248 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:10:12.248 | 99.99th=[39060] 00:10:12.248 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:10:12.248 slat (nsec): min=1674, max=8812.8k, avg=126234.50, stdev=529742.44 00:10:12.248 clat (usec): min=1208, max=40213, avg=19239.98, stdev=7972.07 00:10:12.248 lat (usec): min=1220, max=40222, avg=19366.21, stdev=8026.87 00:10:12.248 clat percentiles (usec): 00:10:12.248 | 1.00th=[ 3818], 5.00th=[ 6718], 10.00th=[ 8848], 20.00th=[12256], 00:10:12.248 | 30.00th=[16188], 40.00th=[17171], 50.00th=[17433], 60.00th=[19268], 00:10:12.248 | 70.00th=[23987], 80.00th=[26608], 90.00th=[30802], 95.00th=[33817], 00:10:12.248 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:10:12.248 | 99.99th=[40109] 00:10:12.248 bw ( KiB/s): min=15792, max=16384, per=20.29%, avg=16088.00, stdev=418.61, samples=2 00:10:12.248 iops : min= 3948, max= 4096, avg=4022.00, stdev=104.65, samples=2 00:10:12.248 lat (msec) : 2=0.03%, 4=0.52%, 10=10.02%, 20=64.80%, 50=24.63% 00:10:12.248 cpu : usr=2.67%, sys=3.95%, ctx=495, majf=0, minf=2 00:10:12.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:12.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.248 issued rwts: total=3638,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.248 00:10:12.248 Run status group 0 (all jobs): 00:10:12.248 READ: bw=73.4MiB/s (77.0MB/s), 12.6MiB/s-33.2MiB/s (13.3MB/s-34.9MB/s), io=74.4MiB (78.0MB), run=1004-1013msec 00:10:12.248 WRITE: bw=77.4MiB/s (81.2MB/s), 13.9MiB/s-33.8MiB/s (14.6MB/s-35.4MB/s), io=78.4MiB (82.2MB), run=1004-1013msec 00:10:12.248 00:10:12.248 Disk stats (read/write): 00:10:12.248 nvme0n1: ios=2650/3072, merge=0/0, ticks=20174/33788, in_queue=53962, util=97.90% 00:10:12.248 nvme0n2: ios=7510/7680, merge=0/0, ticks=55132/43823, in_queue=98955, util=89.50% 00:10:12.248 nvme0n3: ios=2474/2560, merge=0/0, ticks=20756/23077, in_queue=43833, util=93.05% 00:10:12.248 nvme0n4: ios=3129/3375, merge=0/0, ticks=40151/61892, in_queue=102043, util=96.48% 00:10:12.248 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t 
randwrite -r 1 -v 00:10:12.248 [global] 00:10:12.248 thread=1 00:10:12.248 invalidate=1 00:10:12.248 rw=randwrite 00:10:12.248 time_based=1 00:10:12.248 runtime=1 00:10:12.248 ioengine=libaio 00:10:12.248 direct=1 00:10:12.248 bs=4096 00:10:12.248 iodepth=128 00:10:12.248 norandommap=0 00:10:12.248 numjobs=1 00:10:12.248 00:10:12.248 verify_dump=1 00:10:12.248 verify_backlog=512 00:10:12.248 verify_state_save=0 00:10:12.248 do_verify=1 00:10:12.248 verify=crc32c-intel 00:10:12.248 [job0] 00:10:12.248 filename=/dev/nvme0n1 00:10:12.248 [job1] 00:10:12.248 filename=/dev/nvme0n2 00:10:12.248 [job2] 00:10:12.248 filename=/dev/nvme0n3 00:10:12.248 [job3] 00:10:12.248 filename=/dev/nvme0n4 00:10:12.248 Could not set queue depth (nvme0n1) 00:10:12.248 Could not set queue depth (nvme0n2) 00:10:12.248 Could not set queue depth (nvme0n3) 00:10:12.248 Could not set queue depth (nvme0n4) 00:10:12.509 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.509 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.509 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.509 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.509 fio-3.35 00:10:12.509 Starting 4 threads 00:10:13.896 00:10:13.897 job0: (groupid=0, jobs=1): err= 0: pid=200212: Tue Nov 5 18:59:43 2024 00:10:13.897 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:10:13.897 slat (nsec): min=927, max=13122k, avg=76676.07, stdev=622333.67 00:10:13.897 clat (usec): min=1561, max=33844, avg=10075.85, stdev=4783.50 00:10:13.897 lat (usec): min=1568, max=33850, avg=10152.53, stdev=4836.78 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 2900], 5.00th=[ 4113], 10.00th=[ 5669], 20.00th=[ 6849], 00:10:13.897 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 9765], 00:10:13.897 | 70.00th=[11076], 80.00th=[13304], 90.00th=[16909], 95.00th=[20055], 00:10:13.897 | 99.00th=[27395], 99.50th=[29492], 99.90th=[31065], 99.95th=[31065], 00:10:13.897 | 99.99th=[33817] 00:10:13.897 write: IOPS=4964, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1010msec); 0 zone resets 00:10:13.897 slat (nsec): min=1547, max=12422k, avg=112318.00, stdev=697381.93 00:10:13.897 clat (usec): min=408, max=74924, avg=16317.25, stdev=17093.92 00:10:13.897 lat (usec): min=438, max=74933, avg=16429.57, stdev=17207.19 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 1139], 5.00th=[ 2704], 10.00th=[ 4817], 20.00th=[ 6194], 00:10:13.897 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8586], 60.00th=[10159], 00:10:13.897 | 70.00th=[14877], 80.00th=[24773], 90.00th=[40109], 95.00th=[63177], 00:10:13.897 | 99.00th=[72877], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:10:13.897 | 99.99th=[74974] 00:10:13.897 bw ( KiB/s): min=14736, max=24352, per=22.60%, avg=19544.00, stdev=6799.54, samples=2 00:10:13.897 iops : min= 3684, max= 6088, avg=4886.00, stdev=1699.88, samples=2 00:10:13.897 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.14% 00:10:13.897 lat (msec) : 2=1.38%, 4=4.99%, 10=52.65%, 20=26.11%, 50=10.54% 00:10:13.897 lat (msec) : 100=4.13% 00:10:13.897 cpu : usr=4.26%, sys=5.45%, ctx=356, majf=0, minf=1 00:10:13.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:13.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.897 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.897 issued rwts: total=4608,5014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.897 job1: (groupid=0, jobs=1): err= 0: pid=200213: Tue Nov 5 18:59:43 2024 00:10:13.897 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:10:13.897 slat (nsec): min=888, max=13212k, avg=72252.65, stdev=490491.07 00:10:13.897 clat (usec): min=3444, max=49300, avg=9094.56, stdev=3414.59 00:10:13.897 lat (usec): min=3453, max=49307, avg=9166.82, stdev=3472.08 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 6849], 00:10:13.897 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:13.897 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[11731], 95.00th=[15401], 00:10:13.897 | 99.00th=[22414], 99.50th=[27395], 99.90th=[42730], 99.95th=[42730], 00:10:13.897 | 99.99th=[49546] 00:10:13.897 write: IOPS=6970, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1002msec); 0 zone resets 00:10:13.897 slat (nsec): min=1480, max=13270k, avg=67456.03, stdev=465128.09 00:10:13.897 clat (usec): min=651, max=100731, avg=9531.97, stdev=10686.35 00:10:13.897 lat (usec): min=1163, max=100739, avg=9599.43, stdev=10740.94 00:10:13.897 clat percentiles (msec): 00:10:13.897 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 7], 00:10:13.897 | 30.00th=[ 7], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:10:13.897 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 11], 95.00th=[ 18], 00:10:13.897 | 99.00th=[ 82], 99.50th=[ 90], 99.90th=[ 102], 99.95th=[ 102], 00:10:13.897 | 99.99th=[ 102] 00:10:13.897 bw ( KiB/s): min=22088, max=32768, per=31.71%, avg=27428.00, stdev=7551.90, samples=2 00:10:13.897 iops : min= 5522, max= 8192, avg=6857.00, stdev=1887.98, samples=2 00:10:13.897 lat (usec) : 750=0.01% 00:10:13.897 lat (msec) : 2=0.02%, 4=2.12%, 10=82.21%, 20=12.60%, 50=2.05% 00:10:13.897 lat (msec) : 100=0.88%, 250=0.11% 00:10:13.897 cpu : usr=4.80%, sys=6.89%, ctx=617, majf=0, minf=2 00:10:13.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:13.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.897 issued rwts: total=6656,6984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.897 job2: (groupid=0, jobs=1): err= 0: pid=200214: Tue Nov 5 18:59:43 2024 00:10:13.897 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:13.897 slat (nsec): min=993, max=14185k, avg=91390.35, stdev=657262.59 00:10:13.897 clat (usec): min=3905, max=59570, avg=11250.99, stdev=6514.45 00:10:13.897 lat (usec): min=3914, max=59579, avg=11342.38, stdev=6574.64 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 5866], 5.00th=[ 7111], 10.00th=[ 7242], 20.00th=[ 7767], 00:10:13.897 | 30.00th=[ 8225], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10028], 00:10:13.897 | 70.00th=[10814], 80.00th=[12649], 90.00th=[17171], 95.00th=[21103], 00:10:13.897 | 99.00th=[42730], 99.50th=[53740], 99.90th=[56886], 99.95th=[59507], 00:10:13.897 | 99.99th=[59507] 00:10:13.897 write: IOPS=5710, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1006msec); 0 zone resets 00:10:13.897 slat (nsec): min=1608, max=11153k, avg=78926.60, stdev=504392.29 00:10:13.897 clat (usec): min=1155, max=59532, avg=11185.26, stdev=6345.40 00:10:13.897 lat (usec): min=1167, max=59534, 
avg=11264.19, stdev=6379.97 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 3752], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 6456], 00:10:13.897 | 30.00th=[ 7242], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10683], 00:10:13.897 | 70.00th=[12125], 80.00th=[14746], 90.00th=[18220], 95.00th=[24773], 00:10:13.897 | 99.00th=[33817], 99.50th=[39060], 99.90th=[46400], 99.95th=[46400], 00:10:13.897 | 99.99th=[59507] 00:10:13.897 bw ( KiB/s): min=20480, max=24624, per=26.07%, avg=22552.00, stdev=2930.25, samples=2 00:10:13.897 iops : min= 5120, max= 6156, avg=5638.00, stdev=732.56, samples=2 00:10:13.897 lat (msec) : 2=0.02%, 4=0.59%, 10=56.23%, 20=36.39%, 50=6.36% 00:10:13.897 lat (msec) : 100=0.41% 00:10:13.897 cpu : usr=4.58%, sys=6.67%, ctx=408, majf=0, minf=2 00:10:13.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:13.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.897 issued rwts: total=5632,5745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.897 job3: (groupid=0, jobs=1): err= 0: pid=200215: Tue Nov 5 18:59:43 2024 00:10:13.897 read: IOPS=3854, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1008msec) 00:10:13.897 slat (nsec): min=998, max=17946k, avg=154239.89, stdev=1103367.39 00:10:13.897 clat (usec): min=2796, max=98425, avg=19368.18, stdev=16754.08 00:10:13.897 lat (usec): min=5473, max=98454, avg=19522.42, stdev=16900.79 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 5604], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10683], 00:10:13.897 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12387], 60.00th=[13304], 00:10:13.897 | 70.00th=[15008], 80.00th=[20055], 90.00th=[50070], 95.00th=[60556], 00:10:13.897 | 99.00th=[82314], 99.50th=[82314], 99.90th=[86508], 99.95th=[93848], 00:10:13.897 | 99.99th=[98042] 00:10:13.897 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:13.897 slat (nsec): min=1619, max=11283k, avg=87069.58, stdev=534786.03 00:10:13.897 clat (usec): min=1183, max=74351, avg=12781.41, stdev=7003.91 00:10:13.897 lat (usec): min=1194, max=74360, avg=12868.48, stdev=7031.38 00:10:13.897 clat percentiles (usec): 00:10:13.897 | 1.00th=[ 4015], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9372], 00:10:13.897 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11338], 60.00th=[11994], 00:10:13.897 | 70.00th=[12387], 80.00th=[12911], 90.00th=[17171], 95.00th=[23462], 00:10:13.897 | 99.00th=[53740], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:10:13.897 | 99.99th=[73925] 00:10:13.897 bw ( KiB/s): min=12536, max=20232, per=18.94%, avg=16384.00, stdev=5441.89, samples=2 00:10:13.897 iops : min= 3134, max= 5058, avg=4096.00, stdev=1360.47, samples=2 00:10:13.897 lat (msec) : 2=0.18%, 4=0.31%, 10=20.20%, 20=65.53%, 50=8.70% 00:10:13.897 lat (msec) : 100=5.09% 00:10:13.897 cpu : usr=2.78%, sys=4.97%, ctx=391, majf=0, minf=1 00:10:13.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:13.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.897 issued rwts: total=3885,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.897 00:10:13.897 Run status group 0 (all jobs): 00:10:13.897 READ: bw=80.4MiB/s (84.3MB/s), 
15.1MiB/s-25.9MiB/s (15.8MB/s-27.2MB/s), io=81.2MiB (85.1MB), run=1002-1010msec 00:10:13.897 WRITE: bw=84.5MiB/s (88.6MB/s), 15.9MiB/s-27.2MiB/s (16.6MB/s-28.5MB/s), io=85.3MiB (89.5MB), run=1002-1010msec 00:10:13.897 00:10:13.897 Disk stats (read/write): 00:10:13.897 nvme0n1: ios=4472/4608, merge=0/0, ticks=39608/58721, in_queue=98329, util=85.27% 00:10:13.897 nvme0n2: ios=5681/6143, merge=0/0, ticks=31303/32516, in_queue=63819, util=89.30% 00:10:13.897 nvme0n3: ios=4152/4608, merge=0/0, ticks=47413/54283, in_queue=101696, util=91.99% 00:10:13.897 nvme0n4: ios=3413/3584, merge=0/0, ticks=31059/22438, in_queue=53497, util=96.69% 00:10:13.897 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:13.897 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=200343 00:10:13.897 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:13.898 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:13.898 [global] 00:10:13.898 thread=1 00:10:13.898 invalidate=1 00:10:13.898 rw=read 00:10:13.898 time_based=1 00:10:13.898 runtime=10 00:10:13.898 ioengine=libaio 00:10:13.898 direct=1 00:10:13.898 bs=4096 00:10:13.898 iodepth=1 00:10:13.898 norandommap=1 00:10:13.898 numjobs=1 00:10:13.898 00:10:13.898 [job0] 00:10:13.898 filename=/dev/nvme0n1 00:10:13.898 [job1] 00:10:13.898 filename=/dev/nvme0n2 00:10:13.898 [job2] 00:10:13.898 filename=/dev/nvme0n3 00:10:13.898 [job3] 00:10:13.898 filename=/dev/nvme0n4 00:10:13.898 Could not set queue depth (nvme0n1) 00:10:13.898 Could not set queue depth (nvme0n2) 00:10:13.898 Could not set queue depth (nvme0n3) 00:10:13.898 Could not set queue depth (nvme0n4) 00:10:14.468 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.468 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.468 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.468 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.468 fio-3.35 00:10:14.468 Starting 4 threads 00:10:17.015 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:17.016 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10457088, buflen=4096 00:10:17.016 fio: pid=200742, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.016 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:17.276 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.276 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:17.276 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1699840, buflen=4096 00:10:17.276 fio: pid=200741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.538 fio: io_u error on file /dev/nvme0n1: Operation not 
supported: read offset=368640, buflen=4096
00:10:17.538 fio: pid=200739, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:17.538 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:17.538 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:10:17.538 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:17.538 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:10:17.538 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=307200, buflen=4096
00:10:17.538 fio: pid=200740, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:10:17.538
00:10:17.538 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=200739: Tue Nov 5 18:59:46 2024
00:10:17.538 read: IOPS=31, BW=123KiB/s (126kB/s)(360KiB/2915msec)
00:10:17.538 slat (usec): min=24, max=12632, avg=283.62, stdev=1729.52
00:10:17.538 clat (usec): min=939, max=42062, avg=31846.71, stdev=17576.90
00:10:17.538 lat (usec): min=1021, max=42090, avg=32133.17, stdev=17198.28
00:10:17.538 clat percentiles (usec):
00:10:17.538 | 1.00th=[ 938], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1188],
00:10:17.538 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:10:17.538 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:10:17.538 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:10:17.538 | 99.99th=[42206]
00:10:17.538 bw ( KiB/s): min= 96, max= 104, per=2.40%, avg=97.60, stdev= 3.58, samples=5
00:10:17.538 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5
00:10:17.538 lat (usec) : 1000=2.20%
00:10:17.538 lat (msec) : 2=21.98%, 50=74.73%
00:10:17.538 cpu : usr=0.00%, sys=0.17%, ctx=93, majf=0, minf=1
00:10:17.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:17.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 issued rwts: total=91,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:17.538 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:17.538 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=200740: Tue Nov 5 18:59:46 2024
00:10:17.538 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(300KiB/3105msec)
00:10:17.538 slat (usec): min=26, max=2619, avg=70.32, stdev=305.59
00:10:17.538 clat (usec): min=2906, max=42934, avg=41034.32, stdev=4488.88
00:10:17.538 lat (usec): min=2944, max=43943, avg=41105.20, stdev=4500.99
00:10:17.538 clat percentiles (usec):
00:10:17.538 | 1.00th=[ 2900], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:10:17.538 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681],
00:10:17.538 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:10:17.538 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:10:17.538 | 99.99th=[42730]
00:10:17.538 bw ( KiB/s): min= 89, max= 104, per=2.38%, avg=96.17, stdev= 4.75, samples=6
00:10:17.538 iops : min= 22, max= 26, avg=24.00, stdev= 1.26, samples=6
00:10:17.538 lat (msec) : 4=1.32%, 50=97.37%
00:10:17.538 cpu : usr=0.00%, sys=0.16%, ctx=78, majf=0, minf=2
00:10:17.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:17.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:17.538 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:17.538 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=200741: Tue Nov 5 18:59:46 2024
00:10:17.538 read: IOPS=151, BW=604KiB/s (619kB/s)(1660KiB/2748msec)
00:10:17.538 slat (nsec): min=5200, max=68718, avg=13546.39, stdev=10004.28
00:10:17.538 clat (usec): min=525, max=42121, avg=6550.93, stdev=14211.58
00:10:17.538 lat (usec): min=532, max=42147, avg=6564.45, stdev=14216.58
00:10:17.538 clat percentiles (usec):
00:10:17.538 | 1.00th=[ 668], 5.00th=[ 709], 10.00th=[ 742], 20.00th=[ 775],
00:10:17.538 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 840],
00:10:17.538 | 70.00th=[ 938], 80.00th=[ 988], 90.00th=[41681], 95.00th=[42206],
00:10:17.538 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:10:17.538 | 99.99th=[42206]
00:10:17.538 bw ( KiB/s): min= 96, max= 2160, per=12.59%, avg=508.80, stdev=923.05, samples=5
00:10:17.538 iops : min= 24, max= 540, avg=127.20, stdev=230.76, samples=5
00:10:17.538 lat (usec) : 750=10.58%, 1000=70.91%
00:10:17.538 lat (msec) : 2=4.33%, 50=13.94%
00:10:17.538 cpu : usr=0.11%, sys=0.25%, ctx=417, majf=0, minf=2
00:10:17.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:17.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:17.538 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:17.538 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=200742: Tue Nov 5 18:59:46 2024
00:10:17.538 read: IOPS=1003, BW=4013KiB/s (4109kB/s)(9.97MiB/2545msec)
00:10:17.538 slat (nsec): min=5360, max=80807, avg=25295.81, stdev=4312.81
00:10:17.538 clat (usec): min=532, max=1352, avg=957.02, stdev=78.10
00:10:17.538 lat (usec): min=545, max=1378, avg=982.32, stdev=79.56
00:10:17.538 clat percentiles (usec):
00:10:17.538 | 1.00th=[ 685], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 914],
00:10:17.538 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979],
00:10:17.538 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057],
00:10:17.538 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1254], 99.95th=[ 1336],
00:10:17.538 | 99.99th=[ 1352]
00:10:17.538 bw ( KiB/s): min= 4008, max= 4104, per=100.00%, avg=4052.80, stdev=42.56, samples=5
00:10:17.538 iops : min= 1002, max= 1026, avg=1013.20, stdev=10.64, samples=5
00:10:17.538 lat (usec) : 750=1.84%, 1000=73.02%
00:10:17.538 lat (msec) : 2=25.10%
00:10:17.538 cpu : usr=1.14%, sys=3.38%, ctx=2555, majf=0, minf=2
00:10:17.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:17.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:17.539 issued rwts: total=2554,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:17.539 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:17.539
00:10:17.539 Run status group 0 (all jobs):
00:10:17.539 READ: bw=4036KiB/s (4133kB/s), 96.6KiB/s-4013KiB/s (98.9kB/s-4109kB/s), io=12.2MiB (12.8MB), run=2545-3105msec
00:10:17.539
00:10:17.539 Disk stats (read/write):
00:10:17.539 nvme0n1: ios=87/0, merge=0/0, ticks=2742/0, in_queue=2742, util=92.42%
00:10:17.539 nvme0n2: ios=73/0, merge=0/0, ticks=2997/0, in_queue=2997, util=94.53%
00:10:17.539 nvme0n3: ios=319/0, merge=0/0, ticks=2507/0, in_queue=2507, util=95.56%
00:10:17.539 nvme0n4: ios=2554/0, merge=0/0, ticks=2433/0, in_queue=2433, util=96.09%
00:10:17.799 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:17.799 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:10:18.060 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:18.060 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:10:18.060 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:18.060 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:10:18.320 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:18.320 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 200343
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:18.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0
00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq
0 ']' 00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:18.581 nvmf hotplug test: fio failed as expected 00:10:18.581 18:59:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:18.842 rmmod nvme_tcp 00:10:18.842 rmmod nvme_fabrics 00:10:18.842 rmmod nvme_keyring 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 196713 ']' 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 196713 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 196713 ']' 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 196713 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.842 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 196713 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 196713' 00:10:19.103 killing process with pid 196713 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 196713 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 196713 00:10:19.103 18:59:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:19.103 18:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # 
grep -v SPDK_NVMF 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:10:21.647 00:10:21.647 real 0m29.168s 00:10:21.647 user 2m33.846s 00:10:21.647 sys 0m9.136s 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.647 ************************************ 00:10:21.647 END TEST nvmf_fio_target 00:10:21.647 ************************************ 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.647 ************************************ 00:10:21.647 START TEST nvmf_bdevio 00:10:21.647 ************************************ 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.647 * Looking for test storage... 00:10:21.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.647 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.647 --rc genhtml_branch_coverage=1 00:10:21.647 --rc genhtml_function_coverage=1 00:10:21.647 --rc genhtml_legend=1 00:10:21.648 --rc geninfo_all_blocks=1 00:10:21.648 --rc geninfo_unexecuted_blocks=1 00:10:21.648 00:10:21.648 ' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.648 --rc genhtml_branch_coverage=1 00:10:21.648 --rc genhtml_function_coverage=1 00:10:21.648 --rc genhtml_legend=1 00:10:21.648 --rc geninfo_all_blocks=1 00:10:21.648 --rc geninfo_unexecuted_blocks=1 00:10:21.648 00:10:21.648 ' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.648 --rc genhtml_branch_coverage=1 00:10:21.648 --rc genhtml_function_coverage=1 00:10:21.648 --rc genhtml_legend=1 00:10:21.648 --rc geninfo_all_blocks=1 00:10:21.648 --rc geninfo_unexecuted_blocks=1 00:10:21.648 00:10:21.648 ' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.648 --rc genhtml_branch_coverage=1 00:10:21.648 --rc genhtml_function_coverage=1 00:10:21.648 --rc genhtml_legend=1 00:10:21.648 --rc geninfo_all_blocks=1 00:10:21.648 --rc geninfo_unexecuted_blocks=1 00:10:21.648 00:10:21.648 ' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:21.648 18:59:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:21.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:21.648 
18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:10:21.648 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.788 
18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:29.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:29.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.788 
18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:29.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:29.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:29.788 18:59:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:29.788 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy 
== veth ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:29.789 10.0.0.1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip 
netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:29.789 10.0.0.2 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- 
# get_net_dev initiator0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:29.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.544 ms 00:10:29.789 00:10:29.789 --- 10.0.0.1 ping statistics --- 00:10:29.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.789 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:29.789 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:29.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:10:29.790 00:10:29.790 --- 10.0.0.2 ping statistics --- 00:10:29.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.790 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:29.790 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 
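Note on the trace above: the target-side network bring-up that setup.sh just performed (create_target_ns through ping_ips) reduces to a short sequence of standard iproute2/iptables commands. A minimal hand-run sketch, assuming the same e810 port names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 pool this run allocated:

  ip netns add nvmf_ns_spdk                                        # namespace the target will live in
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link set cvl_0_1 netns nvmf_ns_spdk                           # move the second port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_0                              # initiator side, root namespace
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1   # target side, inside the namespace
  ip link set cvl_0_0 up
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP listener port
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator, as traced above
  ping -c 1 10.0.0.2                                               # initiator -> target ns

The 'echo 10.0.0.x | tee /sys/class/net/<dev>/ifalias' writes in the trace only record each address so that get_ip_address can read it back later; they are bookkeeping, not part of the data path.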
00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:29.790 18:59:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=205808 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 205808 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 205808 ']' 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.790 [2024-11-05 18:59:58.161119] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
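
nvmfappstart launches nvmf_tgt inside the nvmf_ns_spdk namespace and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. Roughly, using the flags and the max_retries=100 budget visible in the trace (a sketch, not the full autotest_common.sh helper):

    # Start the target in the test namespace, from the SPDK build tree.
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll until the RPC socket appears, bailing out if nvmf_tgt dies first.
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited"; exit 1; }
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
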
00:10:29.790 [2024-11-05 18:59:58.161185] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.790 [2024-11-05 18:59:58.260390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.790 [2024-11-05 18:59:58.312499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.790 [2024-11-05 18:59:58.312554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.790 [2024-11-05 18:59:58.312562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.790 [2024-11-05 18:59:58.312569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.790 [2024-11-05 18:59:58.312576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.790 [2024-11-05 18:59:58.315003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:29.790 [2024-11-05 18:59:58.315164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:29.790 [2024-11-05 18:59:58.315290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:29.790 [2024-11-05 18:59:58.315290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:29.790 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:29.791 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.791 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.791 [2024-11-05 18:59:59.038797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.791 Malloc0 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.791 18:59:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:29.791 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:30.052 [2024-11-05 18:59:59.120692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=()
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:30.052 {
00:10:30.052   "params": {
00:10:30.052     "name": "Nvme$subsystem",
00:10:30.052     "trtype": "$TEST_TRANSPORT",
00:10:30.052     "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:30.052     "adrfam": "ipv4",
00:10:30.052     "trsvcid": "$NVMF_PORT",
00:10:30.052     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:30.052     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:30.052     "hdgst": ${hdgst:-false},
00:10:30.052     "ddgst": ${ddgst:-false}
00:10:30.052   },
00:10:30.052   "method": "bdev_nvme_attach_controller"
00:10:30.052 }
00:10:30.052 EOF
00:10:30.052 )")
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq .
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=,
00:10:30.052 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:30.052   "params": {
00:10:30.052     "name": "Nvme1",
00:10:30.052     "trtype": "tcp",
00:10:30.052     "traddr": "10.0.0.2",
00:10:30.052     "adrfam": "ipv4",
00:10:30.052     "trsvcid": "4420",
00:10:30.052     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:30.052     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:30.052     "hdgst": false,
00:10:30.052     "ddgst": false
00:10:30.052   },
00:10:30.052   "method": "bdev_nvme_attach_controller"
00:10:30.052 }'
00:10:30.052 [2024-11-05 18:59:59.177695] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
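
Taken together, bdevio.sh@18 through @22 provision the target with five RPCs, and gen_nvmf_target_json then renders the attach-controller config that bdevio consumes on /dev/fd/62. The same sequence as standalone commands, assuming the SPDK tree as working directory (rpc_cmd is a thin wrapper around scripts/rpc.py):

    ns="ip netns exec nvmf_ns_spdk"   # RPCs go to the target in its namespace
    $ns scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $ns scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $ns scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $ns scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $ns scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
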
00:10:30.052 [2024-11-05 18:59:59.177774] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206149 ]
00:10:30.052 [2024-11-05 18:59:59.256166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:30.052 [2024-11-05 18:59:59.300781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:30.052 [2024-11-05 18:59:59.300854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:30.052 [2024-11-05 18:59:59.301058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.314 I/O targets:
00:10:30.314   Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:10:30.314 
00:10:30.314 
00:10:30.314 CUnit - A unit testing framework for C - Version 2.1-3
00:10:30.314 http://cunit.sourceforge.net/
00:10:30.314 
00:10:30.314 
00:10:30.314 Suite: bdevio tests on: Nvme1n1
00:10:30.314   Test: blockdev write read block ...passed
00:10:30.573   Test: blockdev write zeroes read block ...passed
00:10:30.573   Test: blockdev write zeroes read no split ...passed
00:10:30.573   Test: blockdev write zeroes read split ...passed
00:10:30.573   Test: blockdev write zeroes read split partial ...passed
00:10:30.573   Test: blockdev reset ...[2024-11-05 18:59:59.728874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:10:30.573 [2024-11-05 18:59:59.728946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fb970 (9): Bad file descriptor
00:10:30.573 [2024-11-05 18:59:59.780595] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:10:30.573 passed
00:10:30.574   Test: blockdev write read 8 blocks ...passed
00:10:30.574   Test: blockdev write read size > 128k ...passed
00:10:30.574   Test: blockdev write read invalid size ...passed
00:10:30.574   Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:30.574   Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:30.574   Test: blockdev write read max offset ...passed
00:10:30.834   Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:30.834   Test: blockdev writev readv 8 blocks ...passed
00:10:30.834   Test: blockdev writev readv 30 x 1block ...passed
00:10:30.834   Test: blockdev writev readv block ...passed
00:10:30.834   Test: blockdev writev readv size > 128k ...passed
00:10:30.834   Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:30.834   Test: blockdev comparev and writev ...[2024-11-05 19:00:00.038483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.038510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.038522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.038877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.038885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.038895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.038901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.039264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.039271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.039281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.039639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.039648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.039658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:30.834 [2024-11-05 19:00:00.039664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:10:30.834 passed
00:10:30.834   Test: blockdev nvme passthru rw ...passed
00:10:30.834   Test: blockdev nvme passthru vendor specific ...[2024-11-05 19:00:00.122204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:30.834 [2024-11-05 19:00:00.122221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.122450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:30.834 [2024-11-05 19:00:00.122461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.122794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:30.834 [2024-11-05 19:00:00.122802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:10:30.834 [2024-11-05 19:00:00.123060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:30.834 [2024-11-05 19:00:00.123067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:10:30.834 passed
00:10:30.834   Test: blockdev nvme admin passthru ...passed
00:10:31.094   Test: blockdev copy ...passed
00:10:31.094 
00:10:31.094 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:31.094               suites      1      1    n/a      0        0
00:10:31.094                tests     23     23     23      0        0
00:10:31.094              asserts    152    152    152      0      n/a
00:10:31.094 
00:10:31.094 Elapsed time =    1.231 seconds
00:10:31.094 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:31.094 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.094 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20}
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:10:31.095 rmmod nvme_tcp
00:10:31.095 rmmod nvme_fabrics
00:10:31.095 rmmod nvme_keyring
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e
00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0
00:10:31.095 
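
The COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) notices above are the completions bdevio deliberately provokes in its fused compare-and-write cases, not test failures; the Run Summary confirms 23/23 passed. nvmfcleanup then unloads the kernel modules, and its loop amounts to the following (a sketch of the @102 to @107 trace, assuming retry-until-success semantics):

    set +e                                 # modprobe -r can fail while refs remain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -v echoes the rmmod calls seen above
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
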
19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 205808 ']' 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 205808 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 205808 ']' 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 205808 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.095 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 205808 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 205808' 00:10:31.355 killing process with pid 205808 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 205808 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 205808 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:31.355 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:33.901 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:33.901 19:00:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=()
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore
00:10:33.902 
00:10:33.902 real 0m12.198s
00:10:33.902 user 0m13.618s
00:10:33.902 sys 0m6.078s
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:33.902 ************************************
00:10:33.902 END TEST nvmf_bdevio
00:10:33.902 ************************************
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]]
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]]
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:33.902 ************************************
00:10:33.902 START TEST nvmf_zcopy
00:10:33.902 ************************************
00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:33.902 * Looking for test storage...
00:10:33.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.902 --rc genhtml_branch_coverage=1 00:10:33.902 --rc genhtml_function_coverage=1 00:10:33.902 --rc genhtml_legend=1 00:10:33.902 --rc geninfo_all_blocks=1 00:10:33.902 --rc geninfo_unexecuted_blocks=1 00:10:33.902 00:10:33.902 ' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.902 --rc genhtml_branch_coverage=1 00:10:33.902 --rc genhtml_function_coverage=1 00:10:33.902 --rc genhtml_legend=1 00:10:33.902 --rc geninfo_all_blocks=1 00:10:33.902 --rc geninfo_unexecuted_blocks=1 00:10:33.902 00:10:33.902 ' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.902 --rc genhtml_branch_coverage=1 00:10:33.902 --rc genhtml_function_coverage=1 00:10:33.902 --rc genhtml_legend=1 00:10:33.902 --rc geninfo_all_blocks=1 00:10:33.902 --rc geninfo_unexecuted_blocks=1 00:10:33.902 00:10:33.902 ' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.902 --rc genhtml_branch_coverage=1 00:10:33.902 --rc genhtml_function_coverage=1 00:10:33.902 --rc genhtml_legend=1 00:10:33.902 --rc geninfo_all_blocks=1 00:10:33.902 --rc geninfo_unexecuted_blocks=1 00:10:33.902 00:10:33.902 ' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.902 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.903 19:00:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:33.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 
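
One real wart surfaces here: common.sh line 31 evaluates '[' '' -eq 1 ']' with an empty operand, so test prints "integer expression expected" every time the file is sourced. It is harmless but noisy; a guarded numeric test would silence it. Illustrative fix only, since the flag's name is not visible in this trace and a placeholder is used:

    flag=""                          # placeholder for the config variable that arrives empty
    if [ "${flag:-0}" -eq 1 ]; then  # empty/unset collapses to 0, so no test error
        echo "feature enabled"
    fi
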
00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:10:33.903 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:40.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:40.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:40.488 
19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:40.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:40.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.488 19:00:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:40.488 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:40.748 10.0.0.1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:40.748 10.0.0.2 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
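
setup_interfaces hands out addresses from the integer pool 0x0a000001: val_to_ip formats 167772161 as 10.0.0.1 for the initiator side, and the pre-incremented value 167772162 becomes the target's 10.0.0.2. The conversion, reconstructed from the printf in the trace with the byte extraction written out explicitly (the in-tree helper may differ in detail):

    val_to_ip() {
        local val=$1
        # Split the 32-bit value into four octets, most significant first.
        printf '%u.%u.%u.%u\n' \
            $((val >> 24 & 255)) $((val >> 16 & 255)) \
            $((val >> 8 & 255)) $((val & 255))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2
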
00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:40.748 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:41.009 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:41.010 
19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:41.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.666 ms 00:10:41.010 00:10:41.010 --- 10.0.0.1 ping statistics --- 00:10:41.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.010 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:41.010 
19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:41.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:41.010 00:10:41.010 --- 10.0.0.2 ping statistics --- 00:10:41.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.010 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local 
dev=initiator1 in_ns= ip 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 
-- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:10:41.010 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=210940 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 210940 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 210940 ']' 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.011 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.011 [2024-11-05 19:00:10.325094] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
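[Note: nvmfappstart above launches the target inside the nvmf_ns_spdk namespace (ip netns exec nvmf_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten 210940 blocks until the RPC socket /var/tmp/spdk.sock answers. A hedged sketch of that readiness poll follows; the rpc_get_methods probe is an assumption, the real waitforlisten helper lives in test/common/autotest_common.sh and paths are abbreviated.]
ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Keep probing the RPC socket; bail out early if the target process dies.
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done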
00:10:41.011 [2024-11-05 19:00:10.325165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.271 [2024-11-05 19:00:10.429291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.271 [2024-11-05 19:00:10.480062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.271 [2024-11-05 19:00:10.480113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.271 [2024-11-05 19:00:10.480121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.271 [2024-11-05 19:00:10.480128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.271 [2024-11-05 19:00:10.480135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.271 [2024-11-05 19:00:10.480933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.843 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.843 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:10:41.843 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:41.843 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.843 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.104 [2024-11-05 19:00:11.190819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.104 [2024-11-05 19:00:11.215143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:42.104 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.105 malloc0 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:10:42.105 { 00:10:42.105 "params": { 00:10:42.105 "name": "Nvme$subsystem", 00:10:42.105 "trtype": "$TEST_TRANSPORT", 00:10:42.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.105 "adrfam": "ipv4", 00:10:42.105 "trsvcid": "$NVMF_PORT", 00:10:42.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.105 "hdgst": ${hdgst:-false}, 00:10:42.105 "ddgst": ${ddgst:-false} 00:10:42.105 }, 00:10:42.105 "method": "bdev_nvme_attach_controller" 00:10:42.105 } 00:10:42.105 EOF 00:10:42.105 )") 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
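[Note: gen_nvmf_target_json, traced above, expands a bdev_nvme_attach_controller template against $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, validates it with jq, and bdevperf reads the result from an inherited descriptor (--json /dev/fd/62); the resolved object is printed line by line just below. The sketch here shows the same mechanism via process substitution. jq -n stands in for the real heredoc-plus-jq pipeline, and any outer wrapper bdevperf expects around this method object is not visible in this excerpt.]
gen_conf() {
    # Values below are the ones the trace resolves for Nvme1.
    jq -n '{
        method: "bdev_nvme_attach_controller",
        params: { name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2", adrfam: "ipv4",
                  trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode1",
                  hostnqn: "nqn.2016-06.io.spdk:host1", hdgst: false, ddgst: false }
    }'
}
# <(...) appears as a /dev/fd/NN path in the child, matching --json /dev/fd/62 above.
./build/examples/bdevperf --json <(gen_conf) -t 10 -q 128 -w verify -o 8192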
00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:10:42.105 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:42.105 "params": {
00:10:42.105 "name": "Nvme1",
00:10:42.105 "trtype": "tcp",
00:10:42.105 "traddr": "10.0.0.2",
00:10:42.105 "adrfam": "ipv4",
00:10:42.105 "trsvcid": "4420",
00:10:42.105 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:42.105 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:42.105 "hdgst": false,
00:10:42.105 "ddgst": false
00:10:42.105 },
00:10:42.105 "method": "bdev_nvme_attach_controller"
00:10:42.105 }'
00:10:42.105 [2024-11-05 19:00:11.325767] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:10:42.105 [2024-11-05 19:00:11.325847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211345 ]
00:10:42.105 [2024-11-05 19:00:11.401933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:42.365 [2024-11-05 19:00:11.443914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:42.365 Running I/O for 10 seconds...
00:10:44.704 6656.00 IOPS, 52.00 MiB/s
[2024-11-05T18:00:14.970Z] 6725.00 IOPS, 52.54 MiB/s
[2024-11-05T18:00:15.912Z] 6744.00 IOPS, 52.69 MiB/s
[2024-11-05T18:00:16.896Z] 6755.25 IOPS, 52.78 MiB/s
[2024-11-05T18:00:17.876Z] 6962.20 IOPS, 54.39 MiB/s
[2024-11-05T18:00:18.821Z] 7436.67 IOPS, 58.10 MiB/s
[2024-11-05T18:00:19.764Z] 7770.71 IOPS, 60.71 MiB/s
[2024-11-05T18:00:20.707Z] 8021.75 IOPS, 62.67 MiB/s
[2024-11-05T18:00:21.649Z] 8216.89 IOPS, 64.19 MiB/s
[2024-11-05T18:00:21.649Z] 8374.30 IOPS, 65.42 MiB/s
00:10:52.326 Latency(us)
00:10:52.326 [2024-11-05T18:00:21.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:52.326 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:52.326 Verification LBA range: start 0x0 length 0x1000
00:10:52.326 Nvme1n1 : 10.01 8376.12 65.44 0.00 0.00 15229.64 1727.15 28180.48
00:10:52.326 [2024-11-05T18:00:21.649Z] ===================================================================================================================
00:10:52.326 [2024-11-05T18:00:21.649Z] Total : 8376.12 65.44 0.00 0.00 15229.64 1727.15 28180.48
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=213458
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:52.587 {
00:10:52.587 "params": {
00:10:52.587 "name":
"Nvme$subsystem", 00:10:52.587 "trtype": "$TEST_TRANSPORT", 00:10:52.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.587 "adrfam": "ipv4", 00:10:52.587 "trsvcid": "$NVMF_PORT", 00:10:52.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.587 "hdgst": ${hdgst:-false}, 00:10:52.587 "ddgst": ${ddgst:-false} 00:10:52.587 }, 00:10:52.587 "method": "bdev_nvme_attach_controller" 00:10:52.587 } 00:10:52.587 EOF 00:10:52.587 )") 00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:10:52.587 [2024-11-05 19:00:21.748267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.748296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:10:52.587 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:10:52.587 "params": { 00:10:52.587 "name": "Nvme1", 00:10:52.587 "trtype": "tcp", 00:10:52.587 "traddr": "10.0.0.2", 00:10:52.587 "adrfam": "ipv4", 00:10:52.587 "trsvcid": "4420", 00:10:52.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.587 "hdgst": false, 00:10:52.587 "ddgst": false 00:10:52.587 }, 00:10:52.587 "method": "bdev_nvme_attach_controller" 00:10:52.587 }' 00:10:52.587 [2024-11-05 19:00:21.760264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.760273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 [2024-11-05 19:00:21.772293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.772300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 [2024-11-05 19:00:21.784325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.784332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 [2024-11-05 19:00:21.791308] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:10:52.587 [2024-11-05 19:00:21.791356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213458 ] 00:10:52.587 [2024-11-05 19:00:21.796354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.796361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 [2024-11-05 19:00:21.808384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.808392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.587 [2024-11-05 19:00:21.820415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.587 [2024-11-05 19:00:21.820423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.832447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.832455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.844476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.844484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.856508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.856517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.860751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.588 [2024-11-05 19:00:21.868539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.868547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.880571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.880580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.892601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.892611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.588 [2024-11-05 19:00:21.895714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.588 [2024-11-05 19:00:21.904632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.588 [2024-11-05 19:00:21.904640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.849 [2024-11-05 19:00:21.916668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.849 [2024-11-05 19:00:21.916681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.849 [2024-11-05 19:00:21.928696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.849 [2024-11-05 19:00:21.928709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.849 [2024-11-05 19:00:21.940726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:52.849 [2024-11-05 19:00:21.940736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-entry pair ("subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" followed by "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace") repeats at roughly 12 ms intervals from 19:00:21.952 through 19:00:22.193; the duplicates are elided ...]
00:10:53.111 Running I/O for 5 seconds...
[... the same pair keeps repeating from 19:00:22.205 through 19:00:23.192, elided ...]
00:10:53.896 19081.00 IOPS, 149.07 MiB/s [2024-11-05T18:00:23.219Z]
[... the same pair keeps repeating from 19:00:23.205 through 19:00:23.807, elided ...]
[2024-11-05 19:00:23.820437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.680 [2024-11-05 19:00:23.820452]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.680 [2024-11-05 19:00:23.833885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.680 [2024-11-05 19:00:23.833902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.847194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.847210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.860491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.860507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.873260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.873275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.886356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.886371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.899333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.899351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.912452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.912468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.925440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.925454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.938384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.938399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.951041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.951056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.964298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.964313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.977243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.977258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:23.990412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:23.990427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.681 [2024-11-05 19:00:24.003373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.681 [2024-11-05 19:00:24.003389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.016957] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.016973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.029491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.029507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.042862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.042877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.055519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.055534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.068805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.068820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.081490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.081505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.093960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.093976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.107141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.107157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.120557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.120572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.133048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.133063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.146587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.146605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.159983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.159998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.172536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.172552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.184844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.184859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.198218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.198233] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 19182.00 IOPS, 149.86 MiB/s [2024-11-05T18:00:24.266Z] [2024-11-05 19:00:24.210500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.210515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.223201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.223216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.235991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.236006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.248910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.248925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.943 [2024-11-05 19:00:24.262264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.943 [2024-11-05 19:00:24.262279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.274985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.275000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.288701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.288716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.302099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.302114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.315269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.315285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.328498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.328514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.341709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.341726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.355496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.355512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.368800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.368816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.381973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.381989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 
19:00:24.394789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.394804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.407657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.407672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.420912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.420928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.434546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.434561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.447076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.447091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.460139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.460155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.473100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.473116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.485647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.485663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.499142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.499159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.511852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.511867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.205 [2024-11-05 19:00:24.525456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.205 [2024-11-05 19:00:24.525472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.538354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.538370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.552082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.552098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.564496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.564512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.576976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.576992] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.590487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.590502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.603721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.603736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.616895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.616912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.630357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.630374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.643499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.643515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.656920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.656936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.669601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.669616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.682951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.682966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.696191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.696206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.709444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.709459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.722802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.722817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.736666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.736681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.749999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.750014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.763242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.763257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.776901] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.776917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.467 [2024-11-05 19:00:24.789813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.467 [2024-11-05 19:00:24.789828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.802544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.802559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.815630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.815645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.829202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.829218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.842283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.842300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.855735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.855755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.869345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.869362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.882711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.882728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.896013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.896030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.909355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.909370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.923005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.923020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.936453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.936469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.950038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.950054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.962851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.962867] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.976675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.976691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:24.990278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:24.990293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:25.003620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:25.003635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:25.017030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:25.017046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:25.030454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:25.030469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.729 [2024-11-05 19:00:25.043496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.729 [2024-11-05 19:00:25.043511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.056623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.056638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.070072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.070087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.083248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.083263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.096779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.096794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.109452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.109467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.122956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.122972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.136165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.136184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.149689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.149705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.163384] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.163399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.176063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.176078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.189030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.189046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.201756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.201771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 19204.33 IOPS, 150.03 MiB/s [2024-11-05T18:00:25.313Z] [2024-11-05 19:00:25.215051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.215066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.990 [2024-11-05 19:00:25.228187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.990 [2024-11-05 19:00:25.228202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.241136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.241152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.254040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.254054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.266784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.266800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.280296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.280311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.293108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.293122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.991 [2024-11-05 19:00:25.306559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.991 [2024-11-05 19:00:25.306574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.319794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.319809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.333095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.333110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.346466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:56.251 [2024-11-05 19:00:25.346481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.359678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.359693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.372753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.372769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.386051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.386071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.399270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.399286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.412106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.412121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.424959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.424974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.437502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.437517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.449762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.449777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.462495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.462510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.475093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.475108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.488153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.488168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.501321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.501336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.251 [2024-11-05 19:00:25.514375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.251 [2024-11-05 19:00:25.514390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.252 [2024-11-05 19:00:25.527764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.252 [2024-11-05 19:00:25.527780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.252 [2024-11-05 19:00:25.541396] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.252 [2024-11-05 19:00:25.541411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.252 [2024-11-05 19:00:25.554704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.252 [2024-11-05 19:00:25.554719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.252 [2024-11-05 19:00:25.567362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.252 [2024-11-05 19:00:25.567377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.581094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.581110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.593901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.593915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.607270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.607286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.620868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.620883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.633982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.634002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.647483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.647499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.661065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.661079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.674557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.674572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.687199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.687214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.699664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.699679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.712518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.712533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.725889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.725904] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.738425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.738440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.752079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.752094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.764677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.764692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.777283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.777298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.789803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.789818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.802502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.802517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.815921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.815936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.513 [2024-11-05 19:00:25.829532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.513 [2024-11-05 19:00:25.829547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.843059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.843074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.855653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.855668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.868829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.868844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.882377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.882395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.895056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.895072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.908305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.908321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.921183] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.921199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.934918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.934934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.947354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.947369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.960619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.960633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.973347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.973362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:25.986871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:25.986886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.000127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.000142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.013835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.013851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.026884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.026900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.040133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.040149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.053307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.053323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.065858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.065873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.078437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.078452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.776 [2024-11-05 19:00:26.091271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.776 [2024-11-05 19:00:26.091286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.105010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.105026] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.117768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.117784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.130255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.130271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.143055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.143070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.156776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.156793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.169163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.169179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.182545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.182561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.195805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.195820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.208866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.208882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 19238.25 IOPS, 150.30 MiB/s [2024-11-05T18:00:26.361Z] [2024-11-05 19:00:26.222354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.222370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.235557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.235573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.249133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.249148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.261792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.261807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.275090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.275106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 19:00:26.288572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.038 [2024-11-05 19:00:26.288588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.038 [2024-11-05 
19:00:26.301856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:57.038 [2024-11-05 19:00:26.301871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with fresh timestamps roughly every 13 ms, from 19:00:26.314686 through 19:00:27.216081, while the test keeps retrying nvmf_subsystem_add_ns against the paused subsystem (interleaved with bdevperf's periodic stats tick shown next) ...]
00:10:58.084 19249.80 IOPS, 150.39 MiB/s [2024-11-05T18:00:27.407Z]
00:10:58.084 Latency(us)
00:10:58.084 [2024-11-05T18:00:27.407Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:58.084 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:58.084 Nvme1n1             :       5.01   19249.43     150.39       0.00       0.00    6642.58    2853.55   16274.77
00:10:58.085 [2024-11-05T18:00:27.407Z] ===================================================================================================================
00:10:58.085 [2024-11-05T18:00:27.408Z] Total               :              19249.43     150.39       0.00       0.00    6642.58    2853.55   16274.77
[... the "Requested NSID 1 already in use" / "Unable to add namespace" pair continues from 19:00:27.226851 through 19:00:27.335124 before the retry loop stops ...]
00:10:58.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (213458) - No such process
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 213458
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:58.085 19:00:27
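The 150.39 MiB/s in the table above is just the IOPS figure times the 8192-byte I/O size. A one-line check of that arithmetic (plain awk, not part of the test suite):

# 19249.43 IOPS * 8192 B per I/O / 1048576 B per MiB
awk 'BEGIN { printf "%.2f MiB/s\n", 19249.43 * 8192 / (1024 * 1024) }'
# prints 150.39 MiB/s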
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.085 delay0 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.085 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:58.346 [2024-11-05 19:00:27.528922] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:06.486 Initializing NVMe Controllers 00:11:06.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:06.486 Initialization complete. Launching workers. 00:11:06.487 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 32272 00:11:06.487 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32390, failed to submit 124 00:11:06.487 success 32314, unsuccessful 76, failed 0 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:06.487 rmmod nvme_tcp 00:11:06.487 rmmod nvme_fabrics 00:11:06.487 rmmod nvme_keyring 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 210940 ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 210940 ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
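The abort example's closing counters above can be cross-checked: successful, unsuccessful, and failed aborts should sum to the aborts submitted, and, at least in this run, submitted plus failed-to-submit matches the I/Os the job reported (completed plus failed). A throwaway check with this run's numbers hard-coded; treating the second identity as general behavior of the abort tool is an assumption from this single log:

# Counter consistency check for the abort run above (numbers from this log).
completed=242 failed_io=32272           # NSID 1 I/O counters
submitted=32390 failed_submit=124       # controller-level abort counters
success=32314 unsuccessful=76 failed_abort=0
(( success + unsuccessful + failed_abort == submitted )) && echo "abort outcomes sum to submitted"
(( submitted + failed_submit == completed + failed_io )) && echo "abort attempts match I/Os seen"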
common/autotest_common.sh@957 -- # uname 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 210940' 00:11:06.487 killing process with pid 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 210940 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:06.487 19:00:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 
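killprocess, traced just above, is the suite's guarded kill: check that a PID was given, probe the process with kill -0, refuse to kill a sudo wrapper by inspecting the command name, then kill and reap. A minimal standalone sketch of that pattern; the real helper in autotest_common.sh differs in details such as signal escalation and return codes:

# Sketch of the guarded-kill pattern from the trace above.
killprocess_sketch() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # no PID given
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1           # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null    # wait only reaps our own children
}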
00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:11:07.874 00:11:07.874 real 0m34.344s 00:11:07.874 user 0m45.788s 00:11:07.874 sys 0m11.390s 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:07.874 ************************************ 00:11:07.874 END TEST nvmf_zcopy 00:11:07.874 ************************************ 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:11:07.874 00:11:07.874 real 4m52.443s 00:11:07.874 user 11m42.568s 00:11:07.874 sys 1m43.869s 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.874 19:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.874 ************************************ 00:11:07.874 END TEST nvmf_target_core 00:11:07.874 ************************************ 00:11:07.874 19:00:37 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:07.874 19:00:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:07.874 19:00:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.874 19:00:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.136 ************************************ 00:11:08.136 START TEST nvmf_target_extra 00:11:08.136 ************************************ 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.136 * Looking for test storage... 
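The iptr step in the teardown above is the standard dump-filter-reload idiom for stripping every firewall rule that carries a marker string, in one atomic ruleset swap rather than rule-by-rule deletes. The same idiom as a reusable function (needs root; SPDK_NVMF is the tag the suite puts on its rules):

# Remove every iptables rule whose saved form mentions a marker, atomically.
strip_marked_rules() {
    local marker=$1
    iptables-save | grep -v -- "$marker" | iptables-restore
}
strip_marked_rules SPDK_NVMF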
00:11:08.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.136 --rc genhtml_branch_coverage=1 00:11:08.136 --rc genhtml_function_coverage=1 00:11:08.136 --rc genhtml_legend=1 00:11:08.136 --rc geninfo_all_blocks=1 00:11:08.136 --rc geninfo_unexecuted_blocks=1 00:11:08.136 00:11:08.136 ' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.136 --rc genhtml_branch_coverage=1 00:11:08.136 --rc genhtml_function_coverage=1 00:11:08.136 --rc genhtml_legend=1 00:11:08.136 --rc geninfo_all_blocks=1 00:11:08.136 --rc geninfo_unexecuted_blocks=1 00:11:08.136 00:11:08.136 ' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.136 --rc genhtml_branch_coverage=1 00:11:08.136 --rc genhtml_function_coverage=1 00:11:08.136 --rc genhtml_legend=1 00:11:08.136 --rc geninfo_all_blocks=1 00:11:08.136 --rc geninfo_unexecuted_blocks=1 00:11:08.136 00:11:08.136 ' 00:11:08.136 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.136 --rc genhtml_branch_coverage=1 00:11:08.136 --rc genhtml_function_coverage=1 00:11:08.136 --rc genhtml_legend=1 00:11:08.137 --rc geninfo_all_blocks=1 00:11:08.137 --rc geninfo_unexecuted_blocks=1 00:11:08.137 00:11:08.137 ' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
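The lt 1.15 2 walk traced above is scripts/common.sh's field-wise version comparison: split both versions on '.', '-' and ':', then compare field by field until one side wins. A compact self-contained rendering of the same idea (simplified; missing fields default to 0 here, which is one of several details the real cmp_versions handles more explicitly):

# Field-wise version compare in the style traced above. Returns 0 when $1 < $2.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"    # matches the result of the lt call above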
00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:08.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.137 19:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.399 ************************************ 00:11:08.399 START TEST nvmf_example 00:11:08.399 ************************************ 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:08.399 * Looking for test storage... 
00:11:08.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.399 --rc genhtml_branch_coverage=1 00:11:08.399 --rc genhtml_function_coverage=1 00:11:08.399 --rc genhtml_legend=1 00:11:08.399 --rc geninfo_all_blocks=1 00:11:08.399 --rc geninfo_unexecuted_blocks=1 00:11:08.399 00:11:08.399 ' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.399 --rc genhtml_branch_coverage=1 00:11:08.399 --rc genhtml_function_coverage=1 00:11:08.399 --rc genhtml_legend=1 00:11:08.399 --rc geninfo_all_blocks=1 00:11:08.399 --rc geninfo_unexecuted_blocks=1 00:11:08.399 00:11:08.399 ' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.399 --rc genhtml_branch_coverage=1 00:11:08.399 --rc genhtml_function_coverage=1 00:11:08.399 --rc genhtml_legend=1 00:11:08.399 --rc geninfo_all_blocks=1 00:11:08.399 --rc geninfo_unexecuted_blocks=1 00:11:08.399 00:11:08.399 ' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.399 --rc genhtml_branch_coverage=1 00:11:08.399 --rc genhtml_function_coverage=1 00:11:08.399 --rc genhtml_legend=1 00:11:08.399 --rc geninfo_all_blocks=1 00:11:08.399 --rc geninfo_unexecuted_blocks=1 00:11:08.399 00:11:08.399 ' 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:08.399 19:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.399 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:08.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- 
# '[' -n '' ']' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:11:08.400 19:00:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@133 -- # pci_drivers=() 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:16.546 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:16.547 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:16.547 19:00:44 
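The discovery loop above boils down to two sysfs lookups: keep the PCI functions whose vendor/device pair is on the supported list (0x8086/0x159b is the Intel E810 matched in this run), then read the netdev name from /sys/bus/pci/devices/<addr>/net/. A hedged standalone version of that walk; the vendor and device values are simply the ones this log matched:

# List kernel net devices for PCI functions matching one vendor/device pair.
want_vendor=0x8086 want_device=0x159b    # E810 IDs, as matched in this run
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$want_vendor" ]] || continue
    [[ $(<"$pci/device") == "$want_device" ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done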
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:11:16.547 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:11:16.547 Found net devices under 0000:4b:00.0: cvl_0_0
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:11:16.547 Found net devices under 0000:4b:00.1: cvl_0_1
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # create_target_ns
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=()
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # initiator=cvl_0_0
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # target=cvl_0_1
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:11:16.547 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias
00:11:16.548 10.0.0.1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:11:16.548 10.0.0.2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:11:16.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:16.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.634 ms
00:11:16.548
00:11:16.548 --- 10.0.0.1 ping statistics ---
00:11:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:16.548 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:11:16.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:16.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms
00:11:16.548
00:11:16.548 --- 10.0.0.2 ping statistics ---
00:11:16.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:16.548 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair++ ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:16.548 19:00:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0
00:11:16.548 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=220306
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 220306
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 220306 ']'
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:16.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable
00:11:16.549 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:16.810 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0
00:11:16.810 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:16.810 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:16.810 19:00:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:16.810 19:00:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:29.047 Initializing NVMe Controllers
00:11:29.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:29.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:29.047 Initialization complete. Launching workers.
00:11:29.047 ========================================================
00:11:29.047 Latency(us)
00:11:29.047 Device Information : IOPS MiB/s Average min max
00:11:29.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18569.68 72.54 3446.01 652.23 17434.20
00:11:29.047 ========================================================
00:11:29.047 Total : 18569.68 72.54 3446.01 652.23 17434.20
00:11:29.047
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20}
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:11:29.047 rmmod nvme_tcp
00:11:29.047 rmmod nvme_fabrics
00:11:29.047 rmmod nvme_keyring
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # return 0
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 220306 ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 220306
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 220306 ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 220306
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 220306
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 220306'
00:11:29.047 killing process with pid 220306
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 220306
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 220306
00:11:29.047 nvmf threads initialize successfully
00:11:29.047 bdev subsystem init successfully
00:11:29.047 created a nvmf target service
00:11:29.047 create targets's poll groups done
00:11:29.047 all subsystems of target started
00:11:29.047 nvmf target is running
00:11:29.047 all subsystems of target stopped
00:11:29.047 destroy targets's poll groups done
00:11:29.047 destroyed the nvmf target service
00:11:29.047 bdev subsystem finish successfully
00:11:29.047 nvmf threads destroy successfully
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@264 -- # local dev
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@267 -- # remove_target_ns
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:11:29.047 19:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@268 -- # delete_main_bridge
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # return 0
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:11:29.619 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=()
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@284 -- # iptr
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-save
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-restore
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.620
00:11:29.620 real 0m21.313s
00:11:29.620 user 0m46.754s
00:11:29.620 sys 0m6.811s
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:29.620 ************************************
00:11:29.620 END TEST nvmf_example
00:11:29.620 ************************************
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:29.620 ************************************
00:11:29.620 START TEST nvmf_filesystem
00:11:29.620 ************************************
00:11:29.620 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:29.885 * Looking for test storage...
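For reference, the bring-up and teardown that the nvmf_example trace above records reduces to the short hand-runnable sketch below. This is an illustrative reconstruction, not part of the harness: the device names cvl_0_0/cvl_0_1, the namespace nvmf_ns_spdk, the 10.0.0.0/24 addresses, and the subsystem NQN are taken from this run, while scripts/rpc.py is assumed here as the standalone equivalent of the harness's rpc_cmd wrapper (paths relative to an SPDK checkout).
  # plumb the initiator/target interface pair; the target side lives in a netns
  ip netns add nvmf_ns_spdk
  ip link set cvl_0_1 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_0
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  ip link set cvl_0_0 up
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
  # start the example target in the netns, then configure it over /var/tmp/spdk.sock
  ip netns exec nvmf_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive I/O from the initiator side, exactly as the test did
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'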
00:11:29.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:29.885 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:29.885 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:11:29.885 19:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:29.885 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.886 --rc genhtml_branch_coverage=1
00:11:29.886 --rc genhtml_function_coverage=1
00:11:29.886 --rc genhtml_legend=1
00:11:29.886 --rc geninfo_all_blocks=1
00:11:29.886 --rc geninfo_unexecuted_blocks=1
00:11:29.886
00:11:29.886 '
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.886 --rc genhtml_branch_coverage=1
00:11:29.886 --rc genhtml_function_coverage=1
00:11:29.886 --rc genhtml_legend=1
00:11:29.886 --rc geninfo_all_blocks=1
00:11:29.886 --rc geninfo_unexecuted_blocks=1
00:11:29.886
00:11:29.886 '
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.886 --rc genhtml_branch_coverage=1
00:11:29.886 --rc genhtml_function_coverage=1
00:11:29.886 --rc genhtml_legend=1
00:11:29.886 --rc geninfo_all_blocks=1
00:11:29.886 --rc geninfo_unexecuted_blocks=1
00:11:29.886
00:11:29.886 '
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.886 --rc genhtml_branch_coverage=1
00:11:29.886 --rc genhtml_function_coverage=1
00:11:29.886 --rc genhtml_legend=1
00:11:29.886 --rc geninfo_all_blocks=1
00:11:29.886 --rc geninfo_unexecuted_blocks=1
00:11:29.886
00:11:29.886 '
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:29.886 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:11:29.887 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:29.887 #define SPDK_CONFIG_H
00:11:29.887 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:29.887 #define SPDK_CONFIG_APPS 1
00:11:29.887 #define SPDK_CONFIG_ARCH native
00:11:29.887 #undef SPDK_CONFIG_ASAN
00:11:29.887 #undef SPDK_CONFIG_AVAHI
00:11:29.887 #undef SPDK_CONFIG_CET
00:11:29.887 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:29.887 #define SPDK_CONFIG_COVERAGE 1
00:11:29.887 #define SPDK_CONFIG_CROSS_PREFIX
00:11:29.887 #undef SPDK_CONFIG_CRYPTO
00:11:29.887 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:29.887 #undef SPDK_CONFIG_CUSTOMOCF
00:11:29.887 #undef SPDK_CONFIG_DAOS
00:11:29.887 #define SPDK_CONFIG_DAOS_DIR
00:11:29.887 #define SPDK_CONFIG_DEBUG 1
00:11:29.887 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:29.887 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:11:29.887 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:29.887 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:29.887 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:29.887 #undef SPDK_CONFIG_DPDK_UADK
00:11:29.888 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:11:29.888 #define SPDK_CONFIG_EXAMPLES 1
00:11:29.888 #undef SPDK_CONFIG_FC
00:11:29.888 #define SPDK_CONFIG_FC_PATH
00:11:29.888 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:29.888 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:29.888 #define SPDK_CONFIG_FSDEV 1
00:11:29.888 #undef SPDK_CONFIG_FUSE
00:11:29.888 #undef SPDK_CONFIG_FUZZER
00:11:29.888 #define SPDK_CONFIG_FUZZER_LIB
00:11:29.888 #undef SPDK_CONFIG_GOLANG
00:11:29.888 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:29.888 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:29.888 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:29.888 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:29.888 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:29.888 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:29.888 #undef SPDK_CONFIG_HAVE_LZ4
00:11:29.888 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:29.888 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:29.888 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:29.888 #define SPDK_CONFIG_IDXD 1
00:11:29.888 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:29.888 #undef SPDK_CONFIG_IPSEC_MB
00:11:29.888 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:29.888 #define SPDK_CONFIG_ISAL 1
00:11:29.888 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:29.888 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:29.888 #define SPDK_CONFIG_LIBDIR
00:11:29.888 #undef SPDK_CONFIG_LTO
00:11:29.888 #define SPDK_CONFIG_MAX_LCORES 128
00:11:29.888 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:29.888 #define SPDK_CONFIG_NVME_CUSE 1
00:11:29.888 #undef SPDK_CONFIG_OCF
00:11:29.888 #define SPDK_CONFIG_OCF_PATH
00:11:29.888 #define SPDK_CONFIG_OPENSSL_PATH
00:11:29.888 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:29.888 #define SPDK_CONFIG_PGO_DIR
00:11:29.888 #undef SPDK_CONFIG_PGO_USE
00:11:29.888 #define SPDK_CONFIG_PREFIX /usr/local
00:11:29.888 #undef SPDK_CONFIG_RAID5F
00:11:29.888 #undef SPDK_CONFIG_RBD
00:11:29.888 #define SPDK_CONFIG_RDMA 1
00:11:29.888 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:29.888 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:29.888 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:29.888 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:29.888 #define SPDK_CONFIG_SHARED 1
00:11:29.888 #undef SPDK_CONFIG_SMA
00:11:29.888 #define SPDK_CONFIG_TESTS 1
00:11:29.888 #undef SPDK_CONFIG_TSAN
00:11:29.888 #define SPDK_CONFIG_UBLK 1 00:11:29.888 #define SPDK_CONFIG_UBSAN 1 00:11:29.888 #undef SPDK_CONFIG_UNIT_TESTS 00:11:29.888 #undef SPDK_CONFIG_URING 00:11:29.888 #define SPDK_CONFIG_URING_PATH 00:11:29.888 #undef SPDK_CONFIG_URING_ZNS 00:11:29.888 #undef SPDK_CONFIG_USDT 00:11:29.888 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:29.888 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:29.888 #define SPDK_CONFIG_VFIO_USER 1 00:11:29.888 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:29.888 #define SPDK_CONFIG_VHOST 1 00:11:29.888 #define SPDK_CONFIG_VIRTIO 1 00:11:29.888 #undef SPDK_CONFIG_VTUNE 00:11:29.888 #define SPDK_CONFIG_VTUNE_DIR 00:11:29.888 #define SPDK_CONFIG_WERROR 1 00:11:29.888 #define SPDK_CONFIG_WPDK_DIR 00:11:29.888 #undef SPDK_CONFIG_XNVME 00:11:29.888 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:29.888 19:00:59 
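[annotation] The repeated /opt/golangci, /opt/go and /opt/protoc segments in the PATH values above come from paths/export.sh being sourced once per nested test script, prepending the same directories each time. A minimal sketch of that prepend idiom, with a hypothetical dedup guard that is not in the traced script:

    # Sketch (not the SPDK source): unconditional prepends, re-run on every source.
    export PATH="/opt/golangci/1.54.2/bin:$PATH"
    export PATH="/opt/go/1.21.1/bin:$PATH"
    export PATH="/opt/protoc/21.7/bin:$PATH"
    # Hypothetical guard (assumption, absent from the log): prepend only when missing.
    case ":$PATH:" in
      *":/opt/go/1.21.1/bin:"*) ;;
      *) PATH="/opt/go/1.21.1/bin:$PATH" ;;
    esac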
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:29.888 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:29.889 19:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.889 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
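[annotation] The long run of "-- # : 0" / "-- # export SPDK_TEST_*" pairs above is the shell's no-op ':' builtin forcing a default before the flag is exported; the traced ": 1" or ": tcp" is the already-substituted value. An assumed reconstruction of that idiom (not copied from autotest_common.sh):

    # Keep the caller's value if set, otherwise install the default, then export.
    : "${SPDK_TEST_NVMF:=0}"            # traces as ": 1" when autorun-spdk.conf set it
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

The LD_LIBRARY_PATH and PYTHONPATH values accumulate duplicate entries the same way PATH does: each re-source prepends the same spdk/build, dpdk/build and libvfio-user directories again.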
00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
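[annotation] The rm/cat/echo/LSAN_OPTIONS sequence above builds a LeakSanitizer suppression file so known libfuse3 leaks do not fail sanitized runs. A minimal sketch of the pattern, assuming the traced "cat" appends an existing suppression list whose source is not visible in this log:

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                      # start from a clean file
    echo "leak:libfuse3.so" >> "$asan_suppression_file"  # suppress known fuse leaks
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"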
00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 223216 ]] 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 223216 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:29.890 
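[annotation] The "kill -0 223216" above is a liveness probe, not a termination: signal 0 is never delivered, so the exit status only reports whether the test process exists and can be signaled before set_test_storage sizes its scratch space. A short sketch of the check:

    # Exit status of `kill -0` tells us whether PID 223216 (the traced test
    # process) is alive and signalable; nothing is actually sent.
    if kill -0 223216 2>/dev/null; then
      echo "test process still running"
    fi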
19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:29.890 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.nTVhGY 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.nTVhGY/tests/target /tmp/spdk.nTVhGY 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:29.891 19:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:29.891 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122523402240 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6833139712 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64671412224 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6856704 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:30.155 19:00:59 
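[annotation] The read/mounts/fss/sizes loop running through these lines parses `df -T` output into bash associative arrays keyed by mount point, which the storage-candidate selection below then consults. A sketch of the loop, assuming the traced `df -T | grep -v Filesystem` feeds the `read -r source fs size use avail _ mount` shown above; the *1024 conversion matches the logged byte values (df reports 1K blocks):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))    # 1K blocks -> bytes
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)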
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677392384 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=880640 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:30.155 * Looking for test storage... 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122523402240 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9047732224 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:30.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:30.155 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:11:30.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.156 --rc genhtml_branch_coverage=1
00:11:30.156 --rc genhtml_function_coverage=1
00:11:30.156 --rc genhtml_legend=1
00:11:30.156 --rc geninfo_all_blocks=1
00:11:30.156 --rc geninfo_unexecuted_blocks=1
00:11:30.156
00:11:30.156 '
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:11:30.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.156 --rc genhtml_branch_coverage=1
00:11:30.156 --rc genhtml_function_coverage=1
00:11:30.156 --rc genhtml_legend=1
00:11:30.156 --rc geninfo_all_blocks=1
00:11:30.156 --rc geninfo_unexecuted_blocks=1
00:11:30.156
00:11:30.156 '
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:11:30.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.156 --rc genhtml_branch_coverage=1
00:11:30.156 --rc genhtml_function_coverage=1
00:11:30.156 --rc genhtml_legend=1
00:11:30.156 --rc geninfo_all_blocks=1
00:11:30.156 --rc geninfo_unexecuted_blocks=1
00:11:30.156
00:11:30.156 '
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:11:30.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:30.156 --rc genhtml_branch_coverage=1
00:11:30.156 --rc genhtml_function_coverage=1
00:11:30.156 --rc genhtml_legend=1
00:11:30.156 --rc geninfo_all_blocks=1
00:11:30.156 --rc geninfo_unexecuted_blocks=1
00:11:30.156
00:11:30.156 '
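Annotator's note: the cmp_versions trace above checks whether lcov 1.15 is older than 2 by splitting each version on '.', '-' and ':' and comparing the numeric fields left to right. A minimal standalone sketch of the same idea (the ver_lt name is illustrative, not from scripts/common.sh):
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # strictly smaller field: older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # strictly larger field: newer
    done
    return 1  # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2, enabling legacy --rc options"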
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
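Annotator's note: paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go prefixes each time it is sourced, which is why PATH above repeats them many times over. The duplication is harmless but noisy; a guard like the sketch below (illustrative, not part of the SPDK scripts) would keep PATH idempotent:
prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;           # already present, do nothing
        *) PATH="$1:$PATH" ;;  # prepend only on first sight
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin  # second call is a no-op
export PATH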
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:11:30.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable
00:11:30.156 19:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=()
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:11:38.306 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:11:38.306 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:11:38.307 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:11:38.307 Found net devices under 0000:4b:00.0: cvl_0_0
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:11:38.307 Found net devices under 0000:4b:00.1: cvl_0_1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
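Annotator's note: the "[: : integer expression expected" message earlier in this stretch comes from nvmf/common.sh line 31 running '[' '' -eq 1 ']' - an unset variable expands to the empty string, which test cannot compare numerically. The script survives because the failed test simply takes the false branch, but the usual fix is to default the operand before the numeric test. A sketch (the variable name is a stand-in, not the one the script actually tests):
some_flag=""                          # stand-in for the unset variable
if [ "${some_flag:-0}" -eq 1 ]; then  # ${var:-0} guarantees an integer operand
    echo "flag enabled"
fi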
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # create_target_ns
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=()
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # target=cvl_0_1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias
00:11:38.307 10.0.0.1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:11:38.307 10.0.0.2
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:11:38.307 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
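Annotator's note: the addresses above are derived from an integer pool (167772161 == 0x0A000001 == 10.0.0.1), which is why consecutive interface pairs land on 10.0.0.1/10.0.0.2. The conversion that val_to_ip traces is plain byte extraction; a self-contained equivalent (the _sketch suffix marks this as illustrative, not the script's own helper):
val_to_ip_sketch() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) \
        $(( val & 0xff ))
}
val_to_ip_sketch 167772161   # -> 10.0.0.1
val_to_ip_sketch 167772162   # -> 10.0.0.2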
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:11:38.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:38.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.717 ms
00:11:38.308
00:11:38.308 --- 10.0.0.1 ping statistics ---
00:11:38.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:38.308 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:11:38.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:38.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms
00:11:38.308
00:11:38.308 --- 10.0.0.2 ping statistics ---
00:11:38.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:38.308 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair++ ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
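Annotator's note: condensed replay of the connectivity check above - one ping from inside the target namespace toward the initiator side, one from the root namespace toward the interface that was moved into nvmf_ns_spdk. A failure at this stage points at the interface/namespace plumbing rather than at anything NVMe-oF specific:
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator interface
ping -c 1 10.0.0.2                              # root namespace -> target interface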
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:11:38.308 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:11:38.309 19:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:11:38.309 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:38.309 ************************************
00:11:38.309 START TEST nvmf_filesystem_no_in_capsule
00:11:38.309 ************************************
00:11:38.309 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=227025
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 227025
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 227025 ']'
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:38.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable
19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.309 [2024-11-05 19:01:07.151704] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:11:38.309 [2024-11-05 19:01:07.151803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:38.309 [2024-11-05 19:01:07.239940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:38.309 [2024-11-05 19:01:07.282321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:38.309 [2024-11-05 19:01:07.282360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:38.309 [2024-11-05 19:01:07.282368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:38.309 [2024-11-05 19:01:07.282375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:38.309 [2024-11-05 19:01:07.282380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
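Annotator's note: sketch of what nvmfappstart/waitforlisten do above - launch nvmf_tgt inside the target namespace, then poll the RPC socket until the application answers. rpc.py and the spdk_get_version method are standard SPDK tooling, but the loop shape here is illustrative, not the literal waitforlisten implementation:
ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for (( i = 0; i < 100; i++ )); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
        break   # RPC socket is up, the target is ready for configuration
    fi
    sleep 0.5
done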
00:11:38.309 [2024-11-05 19:01:07.284244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:38.309 [2024-11-05 19:01:07.284365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:38.309 [2024-11-05 19:01:07.284512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:38.309 [2024-11-05 19:01:07.284513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.883 19:01:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 [2024-11-05 19:01:07.999267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 Malloc1
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:38.883 [2024-11-05 19:01:08.127277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:11:38.884 {
00:11:38.884 "name": "Malloc1",
00:11:38.884 "aliases": [
00:11:38.884 "39b421e4-e1cf-4962-aa18-7943ecb18878"
00:11:38.884 ],
00:11:38.884 "product_name": "Malloc disk",
00:11:38.884 "block_size": 512,
00:11:38.884 "num_blocks": 1048576,
00:11:38.884 "uuid": "39b421e4-e1cf-4962-aa18-7943ecb18878",
00:11:38.884 "assigned_rate_limits": {
00:11:38.884 "rw_ios_per_sec": 0,
00:11:38.884 "rw_mbytes_per_sec": 0,
00:11:38.884 "r_mbytes_per_sec": 0,
00:11:38.884 "w_mbytes_per_sec": 0
00:11:38.884 },
00:11:38.884 "claimed": true,
00:11:38.884 "claim_type": "exclusive_write",
00:11:38.884 "zoned": false,
00:11:38.884 "supported_io_types": {
00:11:38.884 "read": true,
00:11:38.884 "write": true,
00:11:38.884 "unmap": true,
00:11:38.884 "flush": true,
00:11:38.884 "reset": true,
00:11:38.884 "nvme_admin": false,
00:11:38.884 "nvme_io": false,
00:11:38.884 "nvme_io_md": false,
00:11:38.884 "write_zeroes": true,
00:11:38.884 "zcopy": true,
00:11:38.884 "get_zone_info": false,
00:11:38.884 "zone_management": false,
00:11:38.884 "zone_append": false,
00:11:38.884 "compare": false,
00:11:38.884 "compare_and_write": false,
00:11:38.884 "abort": true,
00:11:38.884 "seek_hole": false,
00:11:38.884 "seek_data": false,
00:11:38.884 "copy": true,
00:11:38.884 "nvme_iov_md": false
00:11:38.884 },
00:11:38.884 "memory_domains": [
00:11:38.884 {
00:11:38.884 "dma_device_id": "system",
00:11:38.884 "dma_device_type": 1
00:11:38.884 },
00:11:38.884 {
00:11:38.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:38.884 "dma_device_type": 2
00:11:38.884 }
00:11:38.884 ],
00:11:38.884 "driver_specific": {}
00:11:38.884 }
00:11:38.884 ]'
00:11:38.884 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:11:38.884 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512
00:11:38.884 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:11:39.145 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576
00:11:39.145 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512
00:11:39.145 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512
00:11:39.145 19:01:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
SPDKISFASTANDAWESOME 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:42.701 19:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:42.701 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:43.271 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:44.213 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:44.213 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:44.213 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:44.213 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.213 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.213 ************************************ 00:11:44.213 START TEST filesystem_ext4 00:11:44.213 ************************************ 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
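The trace above completes the per-pass plumbing that every filesystem_* test below leans on: Malloc1 is attached as a namespace of nqn.2016-06.io.spdk:cnode1, a TCP listener opens on 10.0.0.2:4420, bdev_get_bdevs piped through jq yields malloc_size=536870912 (1048576 blocks of 512 B), then the host connects with nvme-cli, polls lsblk for the SPDKISFASTANDAWESOME serial, and lays one GPT partition on the resulting /dev/nvme0n1. A condensed sketch of that flow, assuming a running SPDK nvmf_tgt (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the --hostnqn/--hostid flags and the 15-try poll cap from the trace are omitted for brevity):

    # target side: expose the malloc bdev over NVMe/TCP (RPCs as traced above)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: connect, wait for the namespace to surface, then partition it
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

filesystem_ext4, which starts below, is the first consumer of the SPDK_TEST partition created here; btrfs and xfs reuse it after a reformat.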
00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:44.473 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:44.473 mke2fs 1.47.0 (5-Feb-2023) 00:11:44.473 Discarding device blocks: 0/522240 done 00:11:44.473 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:44.473 Filesystem UUID: 9fb7df64-2d4d-4da6-99b1-41ed8077e7b0 00:11:44.473 Superblock backups stored on blocks: 00:11:44.473 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:44.473 00:11:44.473 Allocating group tables: 0/64 done 00:11:44.473 Writing inode tables: 0/64 done 00:11:47.019 Creating journal (8192 blocks): done 00:11:47.280 Writing superblocks and filesystem accounting information: 0/64 done 00:11:47.280 00:11:47.280 19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:47.280 19:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.868 
19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 227025 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.868 00:11:53.868 real 0m8.936s 00:11:53.868 user 0m0.031s 00:11:53.868 sys 0m0.078s 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.868 ************************************ 00:11:53.868 END TEST filesystem_ext4 00:11:53.868 ************************************ 00:11:53.868 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.869 ************************************ 00:11:53.869 START TEST filesystem_btrfs 00:11:53.869 ************************************ 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:53.869 19:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.869 btrfs-progs v6.8.1 00:11:53.869 See https://btrfs.readthedocs.io for more information. 00:11:53.869 00:11:53.869 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:53.869 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.869 this does not affect your deployments: 00:11:53.869 - DUP for metadata (-m dup) 00:11:53.869 - enabled no-holes (-O no-holes) 00:11:53.869 - enabled free-space-tree (-R free-space-tree) 00:11:53.869 00:11:53.869 Label: (null) 00:11:53.869 UUID: 8f8c5560-6f8d-4ebb-93f9-e09fc84691c8 00:11:53.869 Node size: 16384 00:11:53.869 Sector size: 4096 (CPU page size: 4096) 00:11:53.869 Filesystem size: 510.00MiB 00:11:53.869 Block group profiles: 00:11:53.869 Data: single 8.00MiB 00:11:53.869 Metadata: DUP 32.00MiB 00:11:53.869 System: DUP 8.00MiB 00:11:53.869 SSD detected: yes 00:11:53.869 Zoned device: no 00:11:53.869 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.869 Checksum: crc32c 00:11:53.869 Number of devices: 1 00:11:53.869 Devices: 00:11:53.869 ID SIZE PATH 00:11:53.869 1 510.00MiB /dev/nvme0n1p1 00:11:53.869 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:53.869 19:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 227025 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.812 
19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.812 00:11:54.812 real 0m1.440s 00:11:54.812 user 0m0.028s 00:11:54.812 sys 0m0.123s 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.812 19:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.812 ************************************ 00:11:54.812 END TEST filesystem_btrfs 00:11:54.812 ************************************ 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.812 ************************************ 00:11:54.812 START TEST filesystem_xfs 00:11:54.812 ************************************ 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:54.812 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:54.813 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:54.813 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:54.813 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:54.813 19:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:54.813 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:54.813 = sectsz=512 attr=2, projid32bit=1 00:11:54.813 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:54.813 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:54.813 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:54.813 = sunit=0 swidth=0 blks 00:11:54.813 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:54.813 log =internal log bsize=4096 blocks=16384, version=2 00:11:54.813 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:54.813 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:56.197 Discarding blocks...Done. 00:11:56.197 19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:56.197 19:01:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.707 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.707 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.707 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.707 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.707 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.707 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 227025 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.968 00:11:57.968 real 0m2.990s 00:11:57.968 user 0m0.029s 00:11:57.968 sys 0m0.076s 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.968 ************************************ 00:11:57.968 END TEST filesystem_xfs 00:11:57.968 ************************************ 00:11:57.968 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.228 19:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.228 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 227025 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 227025 ']' 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 227025 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 227025 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 227025' 00:11:58.489 killing process with pid 227025 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 227025 00:11:58.489 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 227025 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.750 00:11:58.750 real 0m20.767s 00:11:58.750 user 1m22.082s 00:11:58.750 sys 0m1.465s 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.750 ************************************ 00:11:58.750 END TEST nvmf_filesystem_no_in_capsule 00:11:58.750 ************************************ 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.750 ************************************ 00:11:58.750 START TEST nvmf_filesystem_in_capsule 00:11:58.750 ************************************ 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=231293 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 231293 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 231293 ']' 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
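END TEST nvmf_filesystem_no_in_capsule above closes the first pass of the suite; run_test nvmf_filesystem_in_capsule now repeats the identical ext4/btrfs/xfs matrix with in-capsule data enabled. The only configuration difference between the two passes is the transport RPC traced below at target/filesystem.sh@52: -c sets the TCP transport's in_capsule_data_size, so host writes of up to 4096 bytes ride inside the NVMe/TCP command capsule itself instead of being solicited by the target as a separate data transfer. Side by side (the -c 0 form for the first pass is an assumption; that RPC fired before this excerpt):

    # pass 1, nvmf_filesystem_no_in_capsule (assumed; traced before this excerpt)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # pass 2, nvmf_filesystem_in_capsule: allow 4 KiB of write data in the capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

The remaining flags are the harness's standard TCP transport options; everything downstream of this RPC (subsystem, namespace, listener, the mkfs/mount cycles) is the same as in the first pass.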
00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:58.750 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.750 [2024-11-05 19:01:27.981238] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:11:58.750 [2024-11-05 19:01:27.981285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.750 [2024-11-05 19:01:28.061682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.015 [2024-11-05 19:01:28.097997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.015 [2024-11-05 19:01:28.098031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.015 [2024-11-05 19:01:28.098038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.015 [2024-11-05 19:01:28.098045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.015 [2024-11-05 19:01:28.098051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.015 [2024-11-05 19:01:28.099593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.015 [2024-11-05 19:01:28.099727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.015 [2024-11-05 19:01:28.099895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.015 [2024-11-05 19:01:28.100014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.015 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.015 [2024-11-05 19:01:28.240227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.015 19:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 Malloc1 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.016 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.276 [2024-11-05 19:01:28.364540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:59.276 19:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:59.276 { 00:11:59.276 "name": "Malloc1", 00:11:59.276 "aliases": [ 00:11:59.276 "20e381e0-afda-4f2b-9bb3-4aab351a0d67" 00:11:59.276 ], 00:11:59.276 "product_name": "Malloc disk", 00:11:59.276 "block_size": 512, 00:11:59.276 "num_blocks": 1048576, 00:11:59.276 "uuid": "20e381e0-afda-4f2b-9bb3-4aab351a0d67", 00:11:59.276 "assigned_rate_limits": { 00:11:59.276 "rw_ios_per_sec": 0, 00:11:59.276 "rw_mbytes_per_sec": 0, 00:11:59.276 "r_mbytes_per_sec": 0, 00:11:59.276 "w_mbytes_per_sec": 0 00:11:59.276 }, 00:11:59.276 "claimed": true, 00:11:59.276 "claim_type": "exclusive_write", 00:11:59.276 "zoned": false, 00:11:59.276 "supported_io_types": { 00:11:59.276 "read": true, 00:11:59.276 "write": true, 00:11:59.276 "unmap": true, 00:11:59.276 "flush": true, 00:11:59.276 "reset": true, 00:11:59.276 "nvme_admin": false, 00:11:59.276 "nvme_io": false, 00:11:59.276 "nvme_io_md": false, 00:11:59.276 "write_zeroes": true, 00:11:59.276 "zcopy": true, 00:11:59.276 "get_zone_info": false, 00:11:59.276 "zone_management": false, 00:11:59.276 "zone_append": false, 00:11:59.276 "compare": false, 00:11:59.276 "compare_and_write": false, 00:11:59.276 "abort": true, 00:11:59.276 "seek_hole": false, 00:11:59.276 "seek_data": false, 00:11:59.276 "copy": true, 00:11:59.276 "nvme_iov_md": false 00:11:59.276 }, 00:11:59.276 "memory_domains": [ 00:11:59.276 { 00:11:59.276 "dma_device_id": "system", 00:11:59.276 "dma_device_type": 1 00:11:59.276 }, 00:11:59.276 { 00:11:59.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.276 "dma_device_type": 2 00:11:59.276 } 00:11:59.276 ], 00:11:59.276 "driver_specific": {} 00:11:59.276 } 00:11:59.276 ]' 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:59.276 19:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.188 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.188 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:12:01.188 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.188 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:01.188 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.102 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.102 19:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.364 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.750 ************************************ 00:12:04.750 START TEST filesystem_in_capsule_ext4 00:12:04.750 ************************************ 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:04.750 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:12:04.751 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:04.751 mke2fs 1.47.0 (5-Feb-2023) 00:12:04.751 Discarding device blocks: 0/522240 done 00:12:04.751 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:04.751 Filesystem UUID: 62f0af31-89df-460c-94fa-eff6ca3d1c1b 00:12:04.751 Superblock backups stored on blocks: 00:12:04.751 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:04.751 00:12:04.751 Allocating group tables: 0/64 done 00:12:04.751 Writing inode tables: 
0/64 done 00:12:04.751 Creating journal (8192 blocks): done 00:12:06.970 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:06.970 00:12:06.970 19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:12:06.970 19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.255 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 231293 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.516 00:12:12.516 real 0m8.010s 00:12:12.516 user 0m0.028s 00:12:12.516 sys 0m0.076s 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 ************************************ 00:12:12.516 END TEST filesystem_in_capsule_ext4 00:12:12.516 ************************************ 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 
************************************
00:12:12.516 START TEST filesystem_in_capsule_btrfs
00:12:12.516 ************************************
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']'
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f
00:12:12.516 19:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:12.779 btrfs-progs v6.8.1
00:12:12.779 See https://btrfs.readthedocs.io for more information.
00:12:12.779
00:12:12.779 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:12.779 NOTE: several default settings have changed in version 5.15, please make sure
00:12:12.779 this does not affect your deployments:
00:12:12.779 - DUP for metadata (-m dup)
00:12:12.779 - enabled no-holes (-O no-holes)
00:12:12.779 - enabled free-space-tree (-R free-space-tree)
00:12:12.779
00:12:12.779 Label: (null)
00:12:12.779 UUID: 03fbf000-9688-43f1-8fc7-df0497991d7a
00:12:12.779 Node size: 16384
00:12:12.779 Sector size: 4096 (CPU page size: 4096)
00:12:12.779 Filesystem size: 510.00MiB
00:12:12.779 Block group profiles:
00:12:12.779 Data: single 8.00MiB
00:12:12.779 Metadata: DUP 32.00MiB
00:12:12.779 System: DUP 8.00MiB
00:12:12.779 SSD detected: yes
00:12:12.779 Zoned device: no
00:12:12.779 Features: extref, skinny-metadata, no-holes, free-space-tree
00:12:12.779 Checksum: crc32c
00:12:12.779 Number of devices: 1
00:12:12.779 Devices:
00:12:12.779 ID SIZE PATH
00:12:12.779 1 510.00MiB /dev/nvme0n1p1
00:12:12.779
00:12:12.779 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0
19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 231293
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:13.041
00:12:13.041 real 0m0.485s
00:12:13.041 user 0m0.037s
00:12:13.041 sys 0m0.106s
19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable
19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:12:13.041 ************************************
00:12:13.041 END TEST filesystem_in_capsule_btrfs
00:12:13.041 ************************************
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:13.041 ************************************
00:12:13.041 START TEST filesystem_in_capsule_xfs
00:12:13.041 ************************************
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f
00:12:13.041 19:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:13.302 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:13.302 = sectsz=512 attr=2, projid32bit=1
00:12:13.302 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:13.302 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:13.302 data = bsize=4096 blocks=130560, imaxpct=25
00:12:13.302 = sunit=0 swidth=0 blks
00:12:13.302 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:13.302 log =internal log bsize=4096 blocks=16384, version=2
00:12:13.302 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:13.302 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:14.245 Discarding blocks...Done.
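Every make_filesystem run in this suite follows the pattern the trace above just walked for btrfs and is now repeating for xfs: format the partition carved out of the exported namespace, mount it, prove a write/delete round-trip survives a sync, unmount, and confirm the target process and block devices are all still present. A minimal sketch of that cycle, with illustrative variable values rather than the literal body of target/filesystem.sh:

fstype=xfs                     # btrfs and ext4 runs differ only here
dev=/dev/nvme0n1p1             # partition on the NVMe-oF namespace
nvmf_pid=231293                # target PID recorded by the harness

mkfs."$fstype" -f "$dev"       # the '[' fstype = ext4 ']' check above picks the force flag
mount "$dev" /mnt/device
touch /mnt/device/aaa          # write something...
sync
rm /mnt/device/aaa             # ...delete it, and flush both to the device
sync
umount /mnt/device
kill -0 "$nvmf_pid"                        # the target must have survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still enumerated
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still enumerated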
00:12:14.245 19:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:14.245 19:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 231293 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.158 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.419 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.419 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.419 00:12:16.419 real 0m3.139s 00:12:16.419 user 0m0.029s 00:12:16.419 sys 0m0.077s 00:12:16.419 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:16.419 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.419 ************************************ 00:12:16.419 END TEST filesystem_in_capsule_xfs 00:12:16.419 ************************************ 00:12:16.419 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:16.680 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:16.940 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 231293 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 231293 ']' 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 231293 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 231293 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 231293' 00:12:17.201 killing process with pid 231293 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 231293 00:12:17.201 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 231293 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:17.462 00:12:17.462 real 0m18.707s 00:12:17.462 user 1m13.896s 00:12:17.462 sys 0m1.378s 00:12:17.462 19:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.462 ************************************ 00:12:17.462 END TEST nvmf_filesystem_in_capsule 00:12:17.462 ************************************ 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:17.462 rmmod nvme_tcp 00:12:17.462 rmmod nvme_fabrics 00:12:17.462 rmmod nvme_keyring 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@264 -- # local dev 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:17.462 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # return 0 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:20.007 19:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@284 -- # iptr 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-save 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-restore 00:12:20.007 00:12:20.007 real 0m49.968s 00:12:20.007 user 2m38.379s 00:12:20.007 sys 0m8.879s 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.007 ************************************ 00:12:20.007 END TEST nvmf_filesystem 00:12:20.007 ************************************ 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.007 ************************************ 00:12:20.007 START TEST nvmf_target_discovery 00:12:20.007 ************************************ 00:12:20.007 19:01:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:20.007 * Looking for test storage... 
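The teardown just traced (nvmftestfini) unwinds the whole fixture before the next suite begins. Reduced to plain shell, with the device names this run uses, it amounts to:

modprobe -v -r nvme-tcp      # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
ip addr flush dev cvl_0_0    # initiator-side port
ip addr flush dev cvl_0_1    # target-side port, returned from the namespace
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules

The grep -v round-trip is why setup tags every rule it inserts with an SPDK_NVMF comment: teardown can then strip exactly those rules and leave the rest of the firewall untouched.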
00:12:20.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.007 --rc genhtml_branch_coverage=1 00:12:20.007 --rc genhtml_function_coverage=1 00:12:20.007 --rc genhtml_legend=1 00:12:20.007 --rc geninfo_all_blocks=1 00:12:20.007 --rc geninfo_unexecuted_blocks=1 00:12:20.007 00:12:20.007 ' 00:12:20.007 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.007 --rc genhtml_branch_coverage=1 00:12:20.007 --rc genhtml_function_coverage=1 00:12:20.008 --rc genhtml_legend=1 00:12:20.008 --rc geninfo_all_blocks=1 00:12:20.008 --rc geninfo_unexecuted_blocks=1 00:12:20.008 00:12:20.008 ' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.008 --rc genhtml_branch_coverage=1 00:12:20.008 --rc genhtml_function_coverage=1 00:12:20.008 --rc genhtml_legend=1 00:12:20.008 --rc geninfo_all_blocks=1 00:12:20.008 --rc geninfo_unexecuted_blocks=1 00:12:20.008 00:12:20.008 ' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:20.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.008 --rc genhtml_branch_coverage=1 00:12:20.008 --rc genhtml_function_coverage=1 00:12:20.008 --rc genhtml_legend=1 00:12:20.008 --rc geninfo_all_blocks=1 00:12:20.008 --rc geninfo_unexecuted_blocks=1 00:12:20.008 00:12:20.008 ' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:20.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:12:20.008 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:12:28.149 19:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.149 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:28.150 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:28.150 19:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:28.150 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:28.150 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:28.150 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 
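The sequence starting here is the standard phy-mode network fixture: nvmf_tcp_init creates a private network namespace, moves the second physical port into it, and gives each side of the pair an address, which is also stored in ifalias so the get_ip_address helpers can read it back later. Condensed from the records that follow, with the names and addresses this run uses:

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up    # loopback inside the namespace
ip link set cvl_0_1 netns nvmf_ns_spdk          # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_0             # initiator side stays in the host
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up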
00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:12:28.150 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:28.151 10.0.0.1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:28.151 10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:28.151 19:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # 
echo 10.0.0.1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:28.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.684 ms 00:12:28.151 00:12:28.151 --- 10.0.0.1 ping statistics --- 00:12:28.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.151 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= 
count=1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:28.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:12:28.151 00:12:28.151 --- 10.0.0.2 ping statistics --- 00:12:28.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.151 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:28.151 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
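For readers following the xtrace above: the harness's connectivity check reads each interface's test address out of the kernel ifalias (where setup.sh stored it earlier) and pings it from the opposite side of the namespace boundary, initiator address from inside nvmf_ns_spdk, target address from the default namespace. A condensed, hypothetical restatement of that pattern (check_pair is an invented name, not a harness function):

check_pair() {
    # Resolve the per-interface test IPs that setup.sh stored in ifalias.
    local initiator_dev=$1 target_dev=$2 netns=$3
    local initiator_ip target_ip
    initiator_ip=$(cat "/sys/class/net/${initiator_dev}/ifalias")
    target_ip=$(ip netns exec "$netns" cat "/sys/class/net/${target_dev}/ifalias")
    # Ping each side from across the namespace boundary, one packet each.
    ip netns exec "$netns" ping -c 1 "$initiator_ip"
    ping -c 1 "$target_ip"
}
check_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk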
00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 
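The empty NVMF_SECOND_INITIATOR_IP just above falls out of the dev_map lookup: initiator1 has no entry, so the device probe returns status 1 and the variable is left blank for later suites to test with -n. A toy paraphrase of that lookup, with the map contents taken from the setup earlier in this section (lookup_net_dev is illustrative, not the harness's exact get_net_dev body):

declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
lookup_net_dev() {
    # An absent dev_map entry yields status 1, which the callers translate
    # into an empty NVMF_SECOND_* variable rather than a hard error.
    local dev=$1
    [[ -n ${dev_map[$dev]} ]] || return 1
    echo "${dev_map[$dev]}"
}
lookup_net_dev initiator0                           # prints cvl_0_0
lookup_net_dev initiator1 || echo 'no second initiator'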
00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=239569 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 239569 00:12:28.152 19:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 239569 ']' 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:28.152 19:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 [2024-11-05 19:01:56.544908] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:12:28.152 [2024-11-05 19:01:56.545009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.152 [2024-11-05 19:01:56.628791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.152 [2024-11-05 19:01:56.670558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.152 [2024-11-05 19:01:56.670592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.152 [2024-11-05 19:01:56.670600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.152 [2024-11-05 19:01:56.670607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.152 [2024-11-05 19:01:56.670613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
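In outline, nvmfappstart has just launched the target inside the test namespace and waitforlisten is polling until the RPC socket comes up. A rough, hand-written equivalent of that sequence (binary path and flags copied from the log; the socket-file test is a stand-in for waitforlisten's real RPC probe):

sock=/var/tmp/spdk.sock
ip netns exec nvmf_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten really polls over RPC; checking for the UNIX socket file
# is a simplification that is close enough for a sketch.
until [[ -S $sock ]]; do sleep 0.1; done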
00:12:28.152 [2024-11-05 19:01:56.672457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.152 [2024-11-05 19:01:56.672578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.152 [2024-11-05 19:01:56.672734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.152 [2024-11-05 19:01:56.672734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 [2024-11-05 19:01:57.395274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.152 Null1 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:28.152 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.153 19:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.153 [2024-11-05 19:01:57.451585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.153 Null2 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.153 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:28.414 Null3 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 Null4 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.414 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:28.675 00:12:28.675 Discovery Log Number of Records 6, Generation counter 6 00:12:28.675 =====Discovery Log Entry 0====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: current discovery subsystem 00:12:28.675 treq: not required 00:12:28.675 portid: 0 00:12:28.675 trsvcid: 4420 00:12:28.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.675 traddr: 10.0.0.2 00:12:28.675 eflags: explicit discovery connections, duplicate discovery information 00:12:28.675 sectype: none 00:12:28.675 =====Discovery Log Entry 1====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: nvme subsystem 00:12:28.675 treq: not required 00:12:28.675 portid: 0 00:12:28.675 trsvcid: 4420 00:12:28.675 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:28.675 traddr: 10.0.0.2 00:12:28.675 eflags: none 00:12:28.675 sectype: none 00:12:28.675 =====Discovery Log Entry 2====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: nvme subsystem 00:12:28.675 treq: not required 00:12:28.675 portid: 0 00:12:28.675 trsvcid: 4420 00:12:28.675 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:28.675 traddr: 10.0.0.2 00:12:28.675 eflags: none 00:12:28.675 sectype: none 00:12:28.675 =====Discovery Log Entry 3====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: nvme subsystem 00:12:28.675 treq: not required 00:12:28.675 portid: 0 00:12:28.675 trsvcid: 4420 00:12:28.675 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:28.675 traddr: 10.0.0.2 00:12:28.675 eflags: none 00:12:28.675 sectype: none 00:12:28.675 =====Discovery Log Entry 4====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: nvme subsystem 
00:12:28.675 treq: not required 00:12:28.675 portid: 0 00:12:28.675 trsvcid: 4420 00:12:28.675 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:28.675 traddr: 10.0.0.2 00:12:28.675 eflags: none 00:12:28.675 sectype: none 00:12:28.675 =====Discovery Log Entry 5====== 00:12:28.675 trtype: tcp 00:12:28.675 adrfam: ipv4 00:12:28.675 subtype: discovery subsystem referral 00:12:28.676 treq: not required 00:12:28.676 portid: 0 00:12:28.676 trsvcid: 4430 00:12:28.676 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:28.676 traddr: 10.0.0.2 00:12:28.676 eflags: none 00:12:28.676 sectype: none 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:28.676 Perform nvmf subsystem discovery via RPC 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 [ 00:12:28.676 { 00:12:28.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.676 "subtype": "Discovery", 00:12:28.676 "listen_addresses": [ 00:12:28.676 { 00:12:28.676 "trtype": "TCP", 00:12:28.676 "adrfam": "IPv4", 00:12:28.676 "traddr": "10.0.0.2", 00:12:28.676 "trsvcid": "4420" 00:12:28.676 } 00:12:28.676 ], 00:12:28.676 "allow_any_host": true, 00:12:28.676 "hosts": [] 00:12:28.676 }, 00:12:28.676 { 00:12:28.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.676 "subtype": "NVMe", 00:12:28.676 "listen_addresses": [ 00:12:28.676 { 00:12:28.676 "trtype": "TCP", 00:12:28.676 "adrfam": "IPv4", 00:12:28.676 "traddr": "10.0.0.2", 00:12:28.676 "trsvcid": "4420" 00:12:28.676 } 00:12:28.676 ], 00:12:28.676 "allow_any_host": true, 00:12:28.676 "hosts": [], 00:12:28.676 "serial_number": "SPDK00000000000001", 00:12:28.676 "model_number": "SPDK bdev Controller", 00:12:28.676 "max_namespaces": 32, 00:12:28.676 "min_cntlid": 1, 00:12:28.676 "max_cntlid": 65519, 00:12:28.676 "namespaces": [ 00:12:28.676 { 00:12:28.676 "nsid": 1, 00:12:28.676 "bdev_name": "Null1", 00:12:28.676 "name": "Null1", 00:12:28.676 "nguid": "E13D252D44CC4F908C250F1A28259004", 00:12:28.676 "uuid": "e13d252d-44cc-4f90-8c25-0f1a28259004" 00:12:28.676 } 00:12:28.676 ] 00:12:28.676 }, 00:12:28.676 { 00:12:28.676 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:28.676 "subtype": "NVMe", 00:12:28.676 "listen_addresses": [ 00:12:28.676 { 00:12:28.676 "trtype": "TCP", 00:12:28.676 "adrfam": "IPv4", 00:12:28.676 "traddr": "10.0.0.2", 00:12:28.676 "trsvcid": "4420" 00:12:28.676 } 00:12:28.676 ], 00:12:28.676 "allow_any_host": true, 00:12:28.676 "hosts": [], 00:12:28.676 "serial_number": "SPDK00000000000002", 00:12:28.676 "model_number": "SPDK bdev Controller", 00:12:28.676 "max_namespaces": 32, 00:12:28.676 "min_cntlid": 1, 00:12:28.676 "max_cntlid": 65519, 00:12:28.676 "namespaces": [ 00:12:28.676 { 00:12:28.676 "nsid": 1, 00:12:28.676 "bdev_name": "Null2", 00:12:28.676 "name": "Null2", 00:12:28.676 "nguid": "9F2C06EE2DA24A6AB8618A3A796DD2B1", 00:12:28.676 "uuid": "9f2c06ee-2da2-4a6a-b861-8a3a796dd2b1" 00:12:28.676 } 00:12:28.676 ] 00:12:28.676 }, 00:12:28.676 { 00:12:28.676 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:28.676 "subtype": "NVMe", 00:12:28.676 "listen_addresses": [ 00:12:28.676 { 00:12:28.676 "trtype": "TCP", 00:12:28.676 "adrfam": "IPv4", 00:12:28.676 "traddr": "10.0.0.2", 
00:12:28.676 "trsvcid": "4420" 00:12:28.676 } 00:12:28.676 ], 00:12:28.676 "allow_any_host": true, 00:12:28.676 "hosts": [], 00:12:28.676 "serial_number": "SPDK00000000000003", 00:12:28.676 "model_number": "SPDK bdev Controller", 00:12:28.676 "max_namespaces": 32, 00:12:28.676 "min_cntlid": 1, 00:12:28.676 "max_cntlid": 65519, 00:12:28.676 "namespaces": [ 00:12:28.676 { 00:12:28.676 "nsid": 1, 00:12:28.676 "bdev_name": "Null3", 00:12:28.676 "name": "Null3", 00:12:28.676 "nguid": "06F9318735614544860B3D9714D6C3F7", 00:12:28.676 "uuid": "06f93187-3561-4544-860b-3d9714d6c3f7" 00:12:28.676 } 00:12:28.676 ] 00:12:28.676 }, 00:12:28.676 { 00:12:28.676 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:28.676 "subtype": "NVMe", 00:12:28.676 "listen_addresses": [ 00:12:28.676 { 00:12:28.676 "trtype": "TCP", 00:12:28.676 "adrfam": "IPv4", 00:12:28.676 "traddr": "10.0.0.2", 00:12:28.676 "trsvcid": "4420" 00:12:28.676 } 00:12:28.676 ], 00:12:28.676 "allow_any_host": true, 00:12:28.676 "hosts": [], 00:12:28.676 "serial_number": "SPDK00000000000004", 00:12:28.676 "model_number": "SPDK bdev Controller", 00:12:28.676 "max_namespaces": 32, 00:12:28.676 "min_cntlid": 1, 00:12:28.676 "max_cntlid": 65519, 00:12:28.676 "namespaces": [ 00:12:28.676 { 00:12:28.676 "nsid": 1, 00:12:28.676 "bdev_name": "Null4", 00:12:28.676 "name": "Null4", 00:12:28.676 "nguid": "E579FFAB7C114BE88C5E0A1FB13A85DF", 00:12:28.676 "uuid": "e579ffab-7c11-4be8-8c5e-0a1fb13a85df" 00:12:28.676 } 00:12:28.676 ] 00:12:28.676 } 00:12:28.676 ] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.676 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:12:28.677 19:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:28.677 19:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:28.937 rmmod nvme_tcp 00:12:28.937 rmmod nvme_fabrics 00:12:28.937 rmmod nvme_keyring 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 239569 ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 239569 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 239569 ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 239569 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 239569 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 239569' 00:12:28.937 killing process with pid 239569 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 239569 00:12:28.937 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 239569 00:12:29.199 19:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@264 -- # local dev 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:29.199 19:01:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # return 0 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@284 
-- # iptr 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-save 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:12:31.113 00:12:31.113 real 0m11.451s 00:12:31.113 user 0m8.901s 00:12:31.113 sys 0m5.821s 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.113 ************************************ 00:12:31.113 END TEST nvmf_target_discovery 00:12:31.113 ************************************ 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.113 19:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.375 ************************************ 00:12:31.375 START TEST nvmf_referrals 00:12:31.375 ************************************ 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:31.375 * Looking for test storage... 00:12:31.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
scripts/common.sh@344 -- # case "$op" in 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.375 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:31.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.375 --rc genhtml_branch_coverage=1 00:12:31.375 --rc genhtml_function_coverage=1 00:12:31.375 --rc genhtml_legend=1 00:12:31.375 --rc geninfo_all_blocks=1 00:12:31.375 --rc geninfo_unexecuted_blocks=1 00:12:31.375 00:12:31.375 ' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:31.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.376 --rc genhtml_branch_coverage=1 00:12:31.376 --rc genhtml_function_coverage=1 00:12:31.376 --rc genhtml_legend=1 00:12:31.376 --rc geninfo_all_blocks=1 00:12:31.376 --rc geninfo_unexecuted_blocks=1 00:12:31.376 00:12:31.376 ' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:31.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.376 --rc genhtml_branch_coverage=1 00:12:31.376 --rc genhtml_function_coverage=1 00:12:31.376 --rc genhtml_legend=1 00:12:31.376 --rc geninfo_all_blocks=1 00:12:31.376 --rc geninfo_unexecuted_blocks=1 00:12:31.376 00:12:31.376 ' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:31.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.376 --rc genhtml_branch_coverage=1 00:12:31.376 --rc genhtml_function_coverage=1 00:12:31.376 --rc genhtml_legend=1 
00:12:31.376 --rc geninfo_all_blocks=1 00:12:31.376 --rc geninfo_unexecuted_blocks=1 00:12:31.376 00:12:31.376 ' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 -- # : 0 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:31.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:31.376 19:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:12:31.376 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga 
net_devs 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
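The trace above is gather_supported_nvmf_pci_devs matching NICs against per-vendor device-ID tables (e810/x722 for Intel 0x8086, mlx for Mellanox 0x15b3) before printing the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines. A minimal standalone sketch of that scan, using a plain sysfs walk instead of the script's pci_bus_cache helper (the walk is an illustrative equivalent, not the test's actual code):

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        case $device in
            0x1592|0x159b)   # E810 device IDs, as matched in the trace
                echo "Found ${pci##*/} ($vendor - $device)"
                # If a driver is bound, the netdev name lives under $pci/net/
                for net in "$pci"/net/*; do
                    [[ -e $net ]] && echo "  net device: ${net##*/}"
                done
                ;;
        esac
    done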
00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:39.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:12:39.521 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # create_target_ns 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.521 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 
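create_target_ns, traced just above, isolates the target-side port in its own network namespace so initiator and target can talk over real wire on one host. A condensed sketch of that bring-up, assuming iproute2 and root privileges (nvmf_ns_spdk is the namespace name the test uses):

    ns=nvmf_ns_spdk
    ip netns add "$ns"
    # Target-side commands get this prefix, exactly how NVMF_TARGET_NS_CMD
    # is assembled in the trace:
    in_ns=(ip netns exec "$ns")
    "${in_ns[@]}" ip link set lo up   # bring up loopback inside the namespace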
00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:39.522 10.0.0.1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
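The val_to_ip step above turns the integer address pool (ip_pool=0x0a000001) into dotted-quad form before `ip addr add`. A worked equivalent using shift/mask arithmetic; 167772161 is 0x0a000001, i.e. 10.0.0.1, and pool+1 yields the target-side address assigned in the continuation of the trace below:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1 (initiator side, cvl_0_0)
    val_to_ip 167772162   # -> 10.0.0.2 (target side, cvl_0_1 in nvmf_ns_spdk)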
00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:39.522 10.0.0.2 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:39.522 19:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:39.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.652 ms 00:12:39.522 00:12:39.522 --- 10.0.0.1 ping statistics --- 00:12:39.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.522 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:39.522 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:39.523 19:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:39.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:12:39.523 00:12:39.523 --- 10.0.0.2 ping statistics --- 00:12:39.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.523 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator1 
in_ns= ip 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.523 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=244041 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 244041 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 244041 ']' 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
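nvmfappstart/waitforlisten, entered above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, launch the target inside the namespace and poll until its JSON-RPC socket answers. A sketch of that pattern, assuming an SPDK checkout at $SPDK_DIR; the flags match the traced invocation, and polling rpc_get_methods is one plausible readiness probe rather than the helper's exact internals:

    ip netns exec nvmf_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break   # target is up and serving RPCs
        fi
        sleep 0.1
    done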
00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.524 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.524 [2024-11-05 19:02:08.188291] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:12:39.524 [2024-11-05 19:02:08.188366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.524 [2024-11-05 19:02:08.271487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.524 [2024-11-05 19:02:08.313801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.524 [2024-11-05 19:02:08.313837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.524 [2024-11-05 19:02:08.313845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.524 [2024-11-05 19:02:08.313852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.524 [2024-11-05 19:02:08.313858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.524 [2024-11-05 19:02:08.315454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.524 [2024-11-05 19:02:08.315591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.524 [2024-11-05 19:02:08.315753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.524 [2024-11-05 19:02:08.315768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.784 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.784 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:39.784 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:39.784 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.784 19:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 [2024-11-05 19:02:09.042854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:39.784 19:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 [2024-11-05 19:02:09.059071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.784 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.045 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.306 19:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.306 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
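The rpc_cmd calls traced so far map one-to-one onto SPDK's scripts/rpc.py. A condensed replay of the referral flow being exercised, with flags taken verbatim from the trace ($SPDK_DIR is an assumed checkout path):

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # A plain referral points hosts at another discovery service (port 4430):
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # -> 127.0.0.2
    # A referral can instead name a concrete subsystem NQN:
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1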
00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:40.566 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:40.826 19:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:40.826 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.086 19:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.086 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.347 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
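The host-side checks interleaved above (get_referral_ips nvme, get_discovery_entries) all reduce to one pattern: query the discovery service with nvme-cli and filter the JSON log page with jq. A sketch of that pattern using the test's own hostnqn/hostid and the jq filters visible in the trace:

    out=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
            --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)
    # Referral addresses, excluding the discovery controller we queried:
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' <<< "$out" | sort
    # Entries that point at a concrete NVMe subsystem rather than another
    # discovery service:
    jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn' <<< "$out"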
00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.607 19:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # 
nvmfcleanup 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:41.868 rmmod nvme_tcp 00:12:41.868 rmmod nvme_fabrics 00:12:41.868 rmmod nvme_keyring 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 244041 ']' 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 244041 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 244041 ']' 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 244041 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:41.868 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 244041 00:12:42.129 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:42.129 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:42.129 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 244041' 00:12:42.129 killing process with pid 244041 00:12:42.129 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 244041 00:12:42.129 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 244041 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # nvmf_fini 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@264 -- # local dev 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:42.130 19:02:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@130 -- # return 0 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@284 -- # iptr 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-save 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-restore 00:12:44.679 00:12:44.679 real 0m12.978s 00:12:44.679 user 0m15.226s 00:12:44.679 sys 0m6.395s 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:44.679 ************************************ 00:12:44.679 END TEST nvmf_referrals 00:12:44.679 ************************************ 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.679 ************************************ 00:12:44.679 START TEST nvmf_connect_disconnect 00:12:44.679 ************************************ 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:44.679 * Looking for test storage... 00:12:44.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:44.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.679 --rc genhtml_branch_coverage=1 00:12:44.679 --rc genhtml_function_coverage=1 00:12:44.679 --rc genhtml_legend=1 00:12:44.679 --rc geninfo_all_blocks=1 00:12:44.679 --rc geninfo_unexecuted_blocks=1 00:12:44.679 00:12:44.679 ' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:44.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.679 --rc genhtml_branch_coverage=1 00:12:44.679 --rc genhtml_function_coverage=1 00:12:44.679 --rc genhtml_legend=1 00:12:44.679 --rc geninfo_all_blocks=1 00:12:44.679 --rc geninfo_unexecuted_blocks=1 00:12:44.679 00:12:44.679 ' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:44.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.679 --rc genhtml_branch_coverage=1 00:12:44.679 --rc genhtml_function_coverage=1 00:12:44.679 --rc genhtml_legend=1 00:12:44.679 --rc geninfo_all_blocks=1 00:12:44.679 --rc geninfo_unexecuted_blocks=1 00:12:44.679 00:12:44.679 ' 00:12:44.679 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:44.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.680 --rc genhtml_branch_coverage=1 00:12:44.680 --rc genhtml_function_coverage=1 00:12:44.680 --rc genhtml_legend=1 00:12:44.680 --rc geninfo_all_blocks=1 00:12:44.680 --rc geninfo_unexecuted_blocks=1 00:12:44.680 00:12:44.680 ' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:44.680 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:12:44.680 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 
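Annotation: gather_supported_nvmf_pci_devs, traced around this point, classifies NICs purely by PCI vendor:device ID: Intel 0x1592/0x159b go into e810, Intel 0x37d2 into x722, a fixed list of Mellanox (0x15b3) IDs into mlx, and with SPDK_TEST_NVMF_NICS=e810 only the e810 set is copied into pci_devs. A reduced sketch of the same classification, assuming lspci -nD as the device source instead of the suite's pci_bus_cache, and simplified to match any Mellanox device rather than the suite's explicit ID list:

  declare -a e810 x722 mlx pci_devs
  while read -r addr class id _; do
    case $id in
      8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice)
      8086:37d2)           x722+=("$addr") ;;   # Intel X722
      15b3:*)              mlx+=("$addr") ;;    # Mellanox (simplified match)
    esac
  done < <(lspci -nD)
  pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810: keep only E810 ports

On this node that yields the two 0x159b ports at 0000:4b:00.0 and 0000:4b:00.1, whose net devices cvl_0_0 and cvl_0_1 are reported in the trace that follows.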
00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:51.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == 
unbound ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:51.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:51.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ 
up == up ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:51.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:51.272 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:51.273 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 
255 )) 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:51.535 19:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:51.535 10.0.0.1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:51.535 10.0.0.2 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.535 19:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:51.535 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:51.796 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:51.796 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:51.796 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:51.796 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:51.796 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # 
cat /sys/class/net/cvl_0_0/ifalias 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:51.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.662 ms 00:12:51.797 00:12:51.797 --- 10.0.0.1 ping statistics --- 00:12:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.797 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:51.797 19:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:51.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:12:51.797 00:12:51.797 --- 10.0.0.2 ping statistics --- 00:12:51.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.797 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.797 19:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.797 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.797 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- 
# echo cvl_0_1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:51.798 19:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=249077 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 249077 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 249077 ']' 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.798 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.059 [2024-11-05 19:02:21.138266] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:12:52.059 [2024-11-05 19:02:21.138314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.059 [2024-11-05 19:02:21.216576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.059 [2024-11-05 19:02:21.253003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.059 [2024-11-05 19:02:21.253036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.059 [2024-11-05 19:02:21.253044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.059 [2024-11-05 19:02:21.253051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.059 [2024-11-05 19:02:21.253057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
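The startup sequence above launches nvmf_tgt under ip netns exec and then parks in waitforlisten until the app answers on /var/tmp/spdk.sock (with max_retries=100, per the trace). A minimal sketch of that wait pattern, assuming the SPDK tree's scripts/rpc.py and the default RPC socket path; this illustrates the idiom, not the autotest_common.sh implementation, and wait_for_rpc is a hypothetical helper name:

    # Poll until the app with pid $1 is alive and serving RPCs on its socket.
    wait_for_rpc() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1  # app died before listening
            # rpc_get_methods succeeds once the app services RPCs on the socket
            if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }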
00:12:52.059 [2024-11-05 19:02:21.254776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.059 [2024-11-05 19:02:21.254844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.059 [2024-11-05 19:02:21.255008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.059 [2024-11-05 19:02:21.255009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.631 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.631 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:52.631 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:52.631 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.631 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 [2024-11-05 19:02:21.988934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.892 19:02:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 19:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.892 [2024-11-05 19:02:22.058132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:52.892 19:02:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:57.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:11.356 rmmod nvme_tcp 00:13:11.356 rmmod nvme_fabrics 00:13:11.356 rmmod nvme_keyring 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 249077 ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 249077 ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
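At this point the target side is fully assembled over RPC (TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.2:4420), and the five "disconnected 1 controller(s)" lines above are nvme-cli's output from five connect/disconnect rounds. A hand-rolled sketch of that loop, showing the pattern the test exercises rather than the connect_disconnect.sh source (the --hostnqn/--hostid arguments seen elsewhere in the log are omitted for brevity):

    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 5; i++)); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$nqn"
        # on success this prints "NQN:... disconnected 1 controller(s)"
        nvme disconnect -n "$nqn"
    done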
00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 249077' 00:13:11.356 killing process with pid 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 249077 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@264 -- # local dev 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:11.356 19:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # return 0 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@284 -- # iptr 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:13:13.903 00:13:13.903 real 0m29.239s 00:13:13.903 user 1m19.398s 00:13:13.903 sys 0m6.934s 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.903 ************************************ 00:13:13.903 END TEST nvmf_connect_disconnect 00:13:13.903 ************************************ 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.903 ************************************ 00:13:13.903 START TEST nvmf_multitarget 00:13:13.903 ************************************ 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:13.903 * Looking for test storage... 
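The teardown just traced (flush_ip on both interfaces, then iptr) relies on a tagging convention: every firewall rule the suite installs carries an SPDK_NVMF comment, so cleanup can dump the full ruleset, drop the tagged lines, and restore the rest without tracking rules one by one. The matching tagged insert appears later in the multitarget setup below. The idiom as a standalone sketch, with interface and port taken from this run:

    # Setup: insert the accept rule, tagged so it can be found again.
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    # Teardown: reload the ruleset minus every SPDK_NVMF-tagged rule.
    iptables-save | grep -v SPDK_NVMF | iptables-restore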
00:13:13.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.903 19:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:13.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.903 --rc genhtml_branch_coverage=1 00:13:13.903 --rc genhtml_function_coverage=1 00:13:13.903 --rc genhtml_legend=1 00:13:13.903 --rc geninfo_all_blocks=1 00:13:13.903 --rc geninfo_unexecuted_blocks=1 00:13:13.903 00:13:13.903 ' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.903 --rc genhtml_branch_coverage=1 00:13:13.903 --rc genhtml_function_coverage=1 00:13:13.903 --rc genhtml_legend=1 00:13:13.903 --rc geninfo_all_blocks=1 00:13:13.903 --rc geninfo_unexecuted_blocks=1 00:13:13.903 00:13:13.903 ' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.903 --rc genhtml_branch_coverage=1 00:13:13.903 --rc genhtml_function_coverage=1 00:13:13.903 --rc genhtml_legend=1 00:13:13.903 --rc geninfo_all_blocks=1 00:13:13.903 --rc geninfo_unexecuted_blocks=1 00:13:13.903 00:13:13.903 ' 00:13:13.903 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.903 --rc genhtml_branch_coverage=1 00:13:13.903 --rc genhtml_function_coverage=1 00:13:13.903 --rc genhtml_legend=1 00:13:13.903 --rc geninfo_all_blocks=1 00:13:13.903 --rc geninfo_unexecuted_blocks=1 00:13:13.903 00:13:13.903 ' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.904 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@50 -- # : 0 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:13.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:13.904 19:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:13:13.904 19:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.050 19:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.050 19:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # 
mlx=() 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # 
[[ tcp == rdma ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:22.050 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # create_target_ns 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:22.051 10.0.0.1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:22.051 19:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:22.051 10.0.0.2 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:22.051 19:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.051 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:22.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.604 ms 00:13:22.052 00:13:22.052 --- 10.0.0.1 ping statistics --- 00:13:22.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.052 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:22.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
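The ping exchange above and below is the setup's reachability gate: before any NVMe/TCP traffic, ping_ips checks both directions across the interface pair. Reduced to its essentials for this run's namespace and addresses:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # from the target namespace back to the initiator
  ping -c 1 10.0.0.2                              # from the host-side initiator into the target namespace

Only after both single-packet pings succeed does the script go on to export the legacy NVMF_* variables.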
00:13:22.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:13:22.052 00:13:22.052 --- 10.0.0.2 ping statistics --- 00:13:22.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.052 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:22.052 19:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.052 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=257225 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 257225 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 257225 ']' 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
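nvmfappstart launches nvmf_tgt inside the nvmf_ns_spdk namespace and then blocks in waitforlisten until the app answers on its UNIX-domain RPC socket, /var/tmp/spdk.sock. An illustrative poll in the same spirit (not the exact implementation; the rpc.py probe here is an assumption based on how SPDK tests normally check the socket):

  wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
      kill -0 "$pid" 2>/dev/null || return 1          # app exited early
      scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
        > /dev/null 2>&1 && return 0                  # RPC socket is live
      sleep 0.1
    done
    return 1                                          # timed out
  }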
00:13:22.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.053 [2024-11-05 19:02:50.529777] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:13:22.053 [2024-11-05 19:02:50.529831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.053 [2024-11-05 19:02:50.604336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.053 [2024-11-05 19:02:50.641644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.053 [2024-11-05 19:02:50.641676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.053 [2024-11-05 19:02:50.641684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.053 [2024-11-05 19:02:50.641691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.053 [2024-11-05 19:02:50.641697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.053 [2024-11-05 19:02:50.643482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.053 [2024-11-05 19:02:50.643626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.053 [2024-11-05 19:02:50.643644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.053 [2024-11-05 19:02:50.643648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:22.053 19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:22.053 "nvmf_tgt_1" 00:13:22.053 
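The multitarget test traced above and continuing below is a counting exercise against the RPC layer: the default target makes the baseline 1, two named targets are created, and jq length on nvmf_get_targets must read 3 and then 1 again once both are deleted. A condensed sketch, assuming multitarget_rpc.py is invoked from the repo root as in this run:

  rpc=test/nvmf/target/multitarget_rpc.py
  [[ $($rpc nvmf_get_targets | jq length) == 1 ]] || exit 1   # baseline: default target only
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [[ $($rpc nvmf_get_targets | jq length) == 3 ]] || exit 1
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [[ $($rpc nvmf_get_targets | jq length) == 1 ]] || exit 1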
19:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:22.053 "nvmf_tgt_2" 00:13:22.053 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:22.053 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:22.053 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:22.053 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:22.053 true 00:13:22.053 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:22.314 true 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:22.314 rmmod nvme_tcp 00:13:22.314 rmmod nvme_fabrics 00:13:22.314 rmmod nvme_keyring 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 257225 ']' 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 257225 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 257225 ']' 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 257225 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:13:22.314 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 257225 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 257225' 00:13:22.575 killing process with pid 257225 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 257225 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 257225 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@264 -- # local dev 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:22.575 19:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # return 0 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:25.121 19:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@284 -- # iptr 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-save 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-restore 00:13:25.121 00:13:25.121 real 0m11.026s 00:13:25.121 user 0m7.464s 00:13:25.121 sys 0m5.899s 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:25.121 ************************************ 00:13:25.121 END TEST nvmf_multitarget 00:13:25.121 ************************************ 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.121 ************************************ 00:13:25.121 START TEST nvmf_rpc 00:13:25.121 ************************************ 00:13:25.121 19:02:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:25.121 * Looking for test storage... 
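Teardown never tracks individual firewall rules. As the common.sh@541/@542 calls traced in this run show, every rule is inserted through ipts, which appends an "-m comment --comment SPDK_NVMF:<rule>" tag, and iptr later removes them all by round-tripping the ruleset through a grep. Reconstructed from those traced calls:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

  ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  iptr                                                       # later: drop every tagged rule in one pass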
00:13:25.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:25.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.121 --rc genhtml_branch_coverage=1 00:13:25.121 --rc genhtml_function_coverage=1 00:13:25.121 --rc genhtml_legend=1 00:13:25.121 --rc geninfo_all_blocks=1 00:13:25.121 --rc geninfo_unexecuted_blocks=1 00:13:25.121 00:13:25.121 ' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:25.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.121 --rc genhtml_branch_coverage=1 00:13:25.121 --rc genhtml_function_coverage=1 00:13:25.121 --rc genhtml_legend=1 00:13:25.121 --rc geninfo_all_blocks=1 00:13:25.121 --rc geninfo_unexecuted_blocks=1 00:13:25.121 00:13:25.121 ' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:25.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.121 --rc genhtml_branch_coverage=1 00:13:25.121 --rc genhtml_function_coverage=1 00:13:25.121 --rc genhtml_legend=1 00:13:25.121 --rc geninfo_all_blocks=1 00:13:25.121 --rc geninfo_unexecuted_blocks=1 00:13:25.121 00:13:25.121 ' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:25.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.121 --rc genhtml_branch_coverage=1 00:13:25.121 --rc genhtml_function_coverage=1 00:13:25.121 --rc genhtml_legend=1 00:13:25.121 --rc geninfo_all_blocks=1 00:13:25.121 --rc geninfo_unexecuted_blocks=1 00:13:25.121 00:13:25.121 ' 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
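The lcov probe above runs the library's version comparator: "lt 1.15 2" splits both strings on '.', '-' and ':' and walks the fields left to right. A hedged reconstruction of that logic, folding lt and cmp_versions into one function for brevity and handling numeric fields only (the real scripts/common.sh validates each field through its decimal helper first):

  lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less-than
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: not less-than
    done
    return 1                                            # all fields equal
  }
  lt 1.15 2 && echo '1.15 < 2'   # matches the comparison traced in this run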
00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.121 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:25.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 
-- # prepare_net_devs 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:13:25.122 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.262 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:33.263 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:33.263 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:33.263 19:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:33.263 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:33.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # create_target_ns 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:33.263 19:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:33.263 10.0.0.1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:33.263 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:33.263 10.0.0.2 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:33.264 19:03:01 
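[editor's note] The xtrace above is dense; collapsed to just the commands it evaluates, the interface plumbing is the following. A minimal sketch, assuming the cvl_0_0/cvl_0_1 device names and the nvmf_ns_spdk namespace already created earlier in the run:

    # initiator NIC stays in the root namespace, target NIC moves into the netns
    ip link set cvl_0_1 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_0
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias        # record the IP for later lookups
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
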
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:33.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.687 ms 00:13:33.264 00:13:33.264 --- 10.0.0.1 ping statistics --- 00:13:33.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.264 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:33.264 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:13:33.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:13:33.264 00:13:33.264 --- 10.0.0.2 ping statistics --- 00:13:33.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.264 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:33.264 19:03:01 
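[editor's note] setup.sh reads addresses back out of the ifalias files it wrote rather than parsing `ip addr`, then verifies the pair in both directions before exporting the legacy NVMF_* variables. A condensed sketch of that check, under the same names as above:

    init_ip=$(cat /sys/class/net/cvl_0_0/ifalias)                            # 10.0.0.1
    tgt_ip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias)  # 10.0.0.2
    ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"   # target netns -> initiator
    ping -c 1 "$tgt_ip"                               # root namespace -> target
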
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:33.264 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.265 19:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=261719 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 261719 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 261719 ']' 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:33.265 19:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.265 [2024-11-05 19:03:01.851582] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
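[editor's note] nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's RPC socket answers. A rough sketch of that sequence; the readiness poll via rpc.py spdk_get_version is an assumed approximation of the helper, not its exact mechanics:

    ip netns exec nvmf_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app is up (stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
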
00:13:33.265 [2024-11-05 19:03:01.851654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.265 [2024-11-05 19:03:01.934227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.265 [2024-11-05 19:03:01.976208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.265 [2024-11-05 19:03:01.976242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.265 [2024-11-05 19:03:01.976251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.265 [2024-11-05 19:03:01.976258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.265 [2024-11-05 19:03:01.976264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.265 [2024-11-05 19:03:01.977794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.265 [2024-11-05 19:03:01.977844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.265 [2024-11-05 19:03:01.978046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.265 [2024-11-05 19:03:01.978046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.525 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:33.525 "tick_rate": 2400000000, 00:13:33.525 "poll_groups": [ 00:13:33.525 { 00:13:33.525 "name": "nvmf_tgt_poll_group_000", 00:13:33.525 "admin_qpairs": 0, 00:13:33.525 "io_qpairs": 0, 00:13:33.525 "current_admin_qpairs": 0, 00:13:33.525 "current_io_qpairs": 0, 00:13:33.525 "pending_bdev_io": 0, 00:13:33.525 "completed_nvme_io": 0, 00:13:33.525 "transports": [] 00:13:33.525 }, 00:13:33.525 { 00:13:33.525 "name": "nvmf_tgt_poll_group_001", 00:13:33.525 "admin_qpairs": 0, 00:13:33.525 "io_qpairs": 0, 00:13:33.525 "current_admin_qpairs": 0, 00:13:33.525 "current_io_qpairs": 0, 00:13:33.525 "pending_bdev_io": 0, 00:13:33.525 "completed_nvme_io": 0, 00:13:33.526 "transports": [] 00:13:33.526 }, 00:13:33.526 { 00:13:33.526 "name": "nvmf_tgt_poll_group_002", 00:13:33.526 "admin_qpairs": 0, 00:13:33.526 "io_qpairs": 0, 00:13:33.526 
"current_admin_qpairs": 0, 00:13:33.526 "current_io_qpairs": 0, 00:13:33.526 "pending_bdev_io": 0, 00:13:33.526 "completed_nvme_io": 0, 00:13:33.526 "transports": [] 00:13:33.526 }, 00:13:33.526 { 00:13:33.526 "name": "nvmf_tgt_poll_group_003", 00:13:33.526 "admin_qpairs": 0, 00:13:33.526 "io_qpairs": 0, 00:13:33.526 "current_admin_qpairs": 0, 00:13:33.526 "current_io_qpairs": 0, 00:13:33.526 "pending_bdev_io": 0, 00:13:33.526 "completed_nvme_io": 0, 00:13:33.526 "transports": [] 00:13:33.526 } 00:13:33.526 ] 00:13:33.526 }' 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.526 [2024-11-05 19:03:02.821764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.526 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.786 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:33.786 "tick_rate": 2400000000, 00:13:33.786 "poll_groups": [ 00:13:33.786 { 00:13:33.786 "name": "nvmf_tgt_poll_group_000", 00:13:33.786 "admin_qpairs": 0, 00:13:33.786 "io_qpairs": 0, 00:13:33.786 "current_admin_qpairs": 0, 00:13:33.786 "current_io_qpairs": 0, 00:13:33.786 "pending_bdev_io": 0, 00:13:33.786 "completed_nvme_io": 0, 00:13:33.786 "transports": [ 00:13:33.786 { 00:13:33.787 "trtype": "TCP" 00:13:33.787 } 00:13:33.787 ] 00:13:33.787 }, 00:13:33.787 { 00:13:33.787 "name": "nvmf_tgt_poll_group_001", 00:13:33.787 "admin_qpairs": 0, 00:13:33.787 "io_qpairs": 0, 00:13:33.787 "current_admin_qpairs": 0, 00:13:33.787 "current_io_qpairs": 0, 00:13:33.787 "pending_bdev_io": 0, 00:13:33.787 "completed_nvme_io": 0, 00:13:33.787 "transports": [ 00:13:33.787 { 00:13:33.787 "trtype": "TCP" 00:13:33.787 } 00:13:33.787 ] 00:13:33.787 }, 00:13:33.787 { 00:13:33.787 "name": "nvmf_tgt_poll_group_002", 00:13:33.787 "admin_qpairs": 0, 00:13:33.787 "io_qpairs": 0, 00:13:33.787 "current_admin_qpairs": 0, 00:13:33.787 "current_io_qpairs": 0, 00:13:33.787 "pending_bdev_io": 0, 00:13:33.787 "completed_nvme_io": 0, 00:13:33.787 "transports": [ 00:13:33.787 { 00:13:33.787 "trtype": "TCP" 
00:13:33.787 } 00:13:33.787 ] 00:13:33.787 }, 00:13:33.787 { 00:13:33.787 "name": "nvmf_tgt_poll_group_003", 00:13:33.787 "admin_qpairs": 0, 00:13:33.787 "io_qpairs": 0, 00:13:33.787 "current_admin_qpairs": 0, 00:13:33.787 "current_io_qpairs": 0, 00:13:33.787 "pending_bdev_io": 0, 00:13:33.787 "completed_nvme_io": 0, 00:13:33.787 "transports": [ 00:13:33.787 { 00:13:33.787 "trtype": "TCP" 00:13:33.787 } 00:13:33.787 ] 00:13:33.787 } 00:13:33.787 ] 00:13:33.787 }' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 Malloc1 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 [2024-11-05 19:03:03.024096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:33.787 [2024-11-05 19:03:03.061012] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:33.787 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:33.787 could not add new controller: failed to write to nvme-fabrics device 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:33.787 19:03:03 
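[editor's note] Condensed, the provisioning that the two nvmf_get_stats checks bracket is the sequence below, and the failed `nvme connect` is deliberate: with allow_any_host disabled and no hosts whitelisted, the target rejects the connection with "does not allow host". A sketch issued over the target's RPC socket (rpc_cmd in the log wraps scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # deny by default
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
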
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.787 19:03:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.697 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.697 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:35.697 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.697 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:35.697 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.607 [2024-11-05 19:03:06.776195] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:37.607 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:37.607 could not add new controller: failed to write to nvme-fabrics device 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.607 
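[editor's note] The host-ACL round trip in the lines above, reduced to its commands; hostnqn stands for the uuid-based NQN the test passes. A sketch:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$hostnqn"                          # accepted: host is whitelisted
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    # a second connect now fails with "does not allow host", so the test
    # re-opens the subsystem before continuing:
    rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
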
19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.607 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.990 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.990 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:38.990 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.990 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:38.990 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.535 
19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 [2024-11-05 19:03:10.495125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 19:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.919 19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.920 19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:42.920 19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.920 19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:42.920 19:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:44.834 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:44.834 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:44.835 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.835 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:44.835 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.835 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:44.835 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.095 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.096 [2024-11-05 19:03:14.263835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
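[editor's note] The test repeats this create/connect/teardown cycle five times (target/rpc.sh's `seq 1 5` loop); one iteration, condensed into a sketch using the same names and the hostnqn from above:

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
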
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.096 19:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.478 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.479 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:46.479 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.479 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:46.479 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 [2024-11-05 19:03:17.976301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.021 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.404 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.404 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:50.404 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.404 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:50.404 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:52.318 
19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:52.318 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 [2024-11-05 19:03:21.699474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.578 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.579 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.579 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.579 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.579 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.579 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.488 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.488 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:54.488 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.488 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:54.488 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
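[editor's note] Each iteration traced here follows the same shape, target/rpc.sh lines 81-94. A sketch of that loop, assuming $rpc_py, $NVMF_IP and $loops stand in for the script's actual variables (the nvmf_* RPC names and flags are exactly those shown in the xtrace):

rpc_py="scripts/rpc.py"
NVMF_IP=10.0.0.2
loops=5

for i in $(seq 1 "$loops"); do
    # build the subsystem from scratch each pass
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a "$NVMF_IP" -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as nsid 5
    $rpc_py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # round-trip one host connection, then tear everything down
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a "$NVMF_IP" -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME

    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The later rpc.sh@99-107 loop is the same pattern minus the host connection: create, listen, add_ns (default nsid), remove_ns 1, delete, five times over.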
00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.398 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.399 [2024-11-05 19:03:25.453822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.399 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.781 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.781 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:57.781 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.782 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:57.782 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:59.694 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:59.955 
19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 [2024-11-05 19:03:29.168195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 [2024-11-05 19:03:29.232330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.955 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 
19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 [2024-11-05 19:03:29.300525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 [2024-11-05 19:03:29.364770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 [2024-11-05 19:03:29.432981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.216 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:00.217 "tick_rate": 2400000000, 00:14:00.217 "poll_groups": [ 00:14:00.217 { 00:14:00.217 "name": "nvmf_tgt_poll_group_000", 00:14:00.217 "admin_qpairs": 0, 00:14:00.217 "io_qpairs": 224, 00:14:00.217 "current_admin_qpairs": 0, 00:14:00.217 "current_io_qpairs": 0, 00:14:00.217 "pending_bdev_io": 0, 00:14:00.217 "completed_nvme_io": 225, 00:14:00.217 "transports": [ 00:14:00.217 { 00:14:00.217 "trtype": "TCP" 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "nvmf_tgt_poll_group_001", 00:14:00.217 "admin_qpairs": 1, 00:14:00.217 "io_qpairs": 223, 00:14:00.217 "current_admin_qpairs": 0, 00:14:00.217 "current_io_qpairs": 0, 00:14:00.217 "pending_bdev_io": 0, 00:14:00.217 "completed_nvme_io": 326, 00:14:00.217 "transports": [ 00:14:00.217 { 00:14:00.217 "trtype": "TCP" 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "nvmf_tgt_poll_group_002", 00:14:00.217 "admin_qpairs": 6, 00:14:00.217 "io_qpairs": 218, 00:14:00.217 "current_admin_qpairs": 0, 00:14:00.217 "current_io_qpairs": 0, 00:14:00.217 "pending_bdev_io": 0, 00:14:00.217 "completed_nvme_io": 220, 00:14:00.217 "transports": [ 00:14:00.217 { 00:14:00.217 "trtype": "TCP" 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "nvmf_tgt_poll_group_003", 00:14:00.217 "admin_qpairs": 0, 00:14:00.217 "io_qpairs": 224, 00:14:00.217 "current_admin_qpairs": 0, 00:14:00.217 "current_io_qpairs": 0, 00:14:00.217 "pending_bdev_io": 0, 00:14:00.217 "completed_nvme_io": 468, 00:14:00.217 "transports": [ 00:14:00.217 { 00:14:00.217 "trtype": "TCP" 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }' 00:14:00.217 19:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:00.217 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:00.479 rmmod nvme_tcp 00:14:00.479 rmmod nvme_fabrics 00:14:00.479 rmmod nvme_keyring 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 261719 ']' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 261719 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 261719 ']' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 261719 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.479 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 261719 00:14:00.480 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:00.480 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.480 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 261719' 
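[editor's note] The two assertions above ((( 7 > 0 )) and (( 889 > 0 ))) come from the jsum helper summing one field across all poll groups of the nvmf_get_stats JSON. A sketch, assuming $stats holds the JSON captured into the shell variable above:

# jsum: extract a numeric field from every poll group with jq,
# then sum the resulting column with awk (reconstructed from the xtrace).
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# With the stats captured above:
#   jsum '.poll_groups[].admin_qpairs'  -> 7    (0 + 1 + 6 + 0)
#   jsum '.poll_groups[].io_qpairs'     -> 889  (224 + 223 + 218 + 224)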
00:14:00.480 killing process with pid 261719 00:14:00.480 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 261719 00:14:00.480 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 261719 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@264 -- # local dev 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:00.745 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # return 0 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@284 -- # iptr 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # 
iptables-save 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-restore 00:14:02.656 00:14:02.656 real 0m38.010s 00:14:02.656 user 1m53.411s 00:14:02.656 sys 0m8.022s 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:02.656 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.656 ************************************ 00:14:02.656 END TEST nvmf_rpc 00:14:02.656 ************************************ 00:14:02.957 19:03:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:02.957 19:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:02.957 19:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.957 19:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.957 ************************************ 00:14:02.957 START TEST nvmf_invalid 00:14:02.957 ************************************ 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:02.958 * Looking for test storage... 00:14:02.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.958 --rc genhtml_branch_coverage=1 00:14:02.958 --rc genhtml_function_coverage=1 00:14:02.958 --rc genhtml_legend=1 00:14:02.958 --rc geninfo_all_blocks=1 00:14:02.958 --rc geninfo_unexecuted_blocks=1 00:14:02.958 00:14:02.958 ' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.958 --rc genhtml_branch_coverage=1 00:14:02.958 --rc genhtml_function_coverage=1 00:14:02.958 --rc genhtml_legend=1 00:14:02.958 --rc geninfo_all_blocks=1 00:14:02.958 --rc geninfo_unexecuted_blocks=1 00:14:02.958 00:14:02.958 ' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.958 --rc genhtml_branch_coverage=1 00:14:02.958 --rc genhtml_function_coverage=1 00:14:02.958 --rc genhtml_legend=1 00:14:02.958 --rc geninfo_all_blocks=1 00:14:02.958 --rc geninfo_unexecuted_blocks=1 00:14:02.958 00:14:02.958 ' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:02.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.958 --rc genhtml_branch_coverage=1 00:14:02.958 --rc genhtml_function_coverage=1 00:14:02.958 --rc genhtml_legend=1 00:14:02.958 --rc geninfo_all_blocks=1 00:14:02.958 --rc geninfo_unexecuted_blocks=1 00:14:02.958 00:14:02.958 ' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.958 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:02.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- 
# '[' -n '' ']' 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:14:02.959 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local 
-ga e810 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:11.253 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:11.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:11.254 19:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:11.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:11.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:11.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 
00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # create_target_ns 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@49 -- # ips=("$ip" 
$((++ip))) 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:11.254 10.0.0.1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf 
'%u.%u.%u.%u\n' 10 0 0 2 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:11.254 10.0.0.2 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:11.254 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@96 -- # local pairs=1 pair 
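Two details worth pulling out of the setup_interfaces trace: val_to_ip turns the integer pool value into a dotted quad (167772161 is 0x0A000001, hence 10.0.0.1), and set_ip both assigns the address and mirrors it into /sys/class/net/<dev>/ifalias so later helpers can read it back with cat. A minimal sketch; the byte-shifting inside val_to_ip is an assumption, since the trace only shows the final printf:

    # Plausible val_to_ip: split a 32-bit value into four octets.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
    }

    ip=$(val_to_ip 167772161)                          # 10.0.0.1, initiator side
    ip addr add "$ip/24" dev cvl_0_0
    echo "$ip" | tee /sys/class/net/cvl_0_0/ifalias    # stash for get_ip_address

    ip=$(val_to_ip 167772162)                          # 10.0.0.2, target side
    ip netns exec nvmf_ns_spdk ip addr add "$ip/24" dev cvl_0_1
    echo "$ip" | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias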
00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:11.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.625 ms 00:14:11.255 00:14:11.255 --- 10.0.0.1 ping statistics --- 00:14:11.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.255 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:11.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:14:11.255 00:14:11.255 --- 10.0.0.2 ping statistics --- 00:14:11.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.255 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:11.255 19:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:11.255 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:11.256 19:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=272070 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 272070 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 272070 ']' 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
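nvmfappstart above boils down to: launch nvmf_tgt inside the target namespace, record its pid, and block in waitforlisten until the JSON-RPC socket answers. A simplified sketch of that wait loop; the retry budget and the use of rpc_get_methods as the liveness probe are assumptions, not necessarily autotest_common.sh's exact logic:

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    rpc_sock=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    for ((i = 100; i > 0; i--)); do
        # Bail out early if the target crashed during startup.
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        # Any cheap RPC proves the socket is up; rpc_get_methods has no side effects.
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    (( i > 0 )) || { echo "timed out waiting for $rpc_sock" >&2; exit 1; }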
00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.256 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.256 [2024-11-05 19:03:39.666639] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:14:11.256 [2024-11-05 19:03:39.666696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.256 [2024-11-05 19:03:39.745769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.256 [2024-11-05 19:03:39.784478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.256 [2024-11-05 19:03:39.784511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.256 [2024-11-05 19:03:39.784519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.256 [2024-11-05 19:03:39.784526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.256 [2024-11-05 19:03:39.784532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.256 [2024-11-05 19:03:39.786341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.256 [2024-11-05 19:03:39.786454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.256 [2024-11-05 19:03:39.786609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.256 [2024-11-05 19:03:39.786610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:11.256 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8465 00:14:11.517 [2024-11-05 19:03:40.660697] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:11.517 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:11.517 { 00:14:11.517 "nqn": "nqn.2016-06.io.spdk:cnode8465", 00:14:11.517 "tgt_name": "foobar", 00:14:11.517 "method": "nvmf_create_subsystem", 00:14:11.517 "req_id": 1 00:14:11.517 } 00:14:11.517 Got JSON-RPC error response 00:14:11.517 response: 00:14:11.517 { 00:14:11.517 "code": -32603, 00:14:11.517 "message": "Unable to find target foobar" 00:14:11.517 }' 00:14:11.517 
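This is the first negative case in invalid.sh: nvmf_create_subsystem is pointed at a transport target that does not exist, the JSON-RPC error is captured into $out, and the next step glob-matches it. The same pattern, condensed (paths shortened; the `|| true` is ours, to keep the failing capture from aborting the script under set -e):

    out=$(./scripts/rpc.py nvmf_create_subsystem -t foobar \
            nqn.2016-06.io.spdk:cnode8465 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || {
        echo "unexpected RPC error: $out" >&2
        exit 1
    }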
19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:11.517 { 00:14:11.517 "nqn": "nqn.2016-06.io.spdk:cnode8465", 00:14:11.517 "tgt_name": "foobar", 00:14:11.517 "method": "nvmf_create_subsystem", 00:14:11.517 "req_id": 1 00:14:11.517 } 00:14:11.517 Got JSON-RPC error response 00:14:11.517 response: 00:14:11.517 { 00:14:11.517 "code": -32603, 00:14:11.517 "message": "Unable to find target foobar" 00:14:11.517 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:11.517 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:11.517 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13973 00:14:11.778 [2024-11-05 19:03:40.853328] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13973: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:11.778 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:11.778 { 00:14:11.778 "nqn": "nqn.2016-06.io.spdk:cnode13973", 00:14:11.778 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:11.778 "method": "nvmf_create_subsystem", 00:14:11.778 "req_id": 1 00:14:11.778 } 00:14:11.778 Got JSON-RPC error response 00:14:11.778 response: 00:14:11.778 { 00:14:11.778 "code": -32602, 00:14:11.778 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:11.778 }' 00:14:11.778 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:11.778 { 00:14:11.778 "nqn": "nqn.2016-06.io.spdk:cnode13973", 00:14:11.778 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:11.778 "method": "nvmf_create_subsystem", 00:14:11.778 "req_id": 1 00:14:11.778 } 00:14:11.778 Got JSON-RPC error response 00:14:11.778 response: 00:14:11.778 { 00:14:11.778 "code": -32602, 00:14:11.778 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:11.778 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:11.778 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:11.778 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode25690 00:14:11.778 [2024-11-05 19:03:41.045907] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25690: invalid model number 'SPDK_Controller' 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:11.778 { 00:14:11.778 "nqn": "nqn.2016-06.io.spdk:cnode25690", 00:14:11.778 "model_number": "SPDK_Controller\u001f", 00:14:11.778 "method": "nvmf_create_subsystem", 00:14:11.778 "req_id": 1 00:14:11.778 } 00:14:11.778 Got JSON-RPC error response 00:14:11.778 response: 00:14:11.778 { 00:14:11.778 "code": -32602, 00:14:11.778 "message": "Invalid MN SPDK_Controller\u001f" 00:14:11.778 }' 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:11.778 { 00:14:11.778 "nqn": "nqn.2016-06.io.spdk:cnode25690", 00:14:11.778 "model_number": "SPDK_Controller\u001f", 00:14:11.778 "method": "nvmf_create_subsystem", 00:14:11.778 "req_id": 1 00:14:11.778 } 00:14:11.778 Got JSON-RPC error response 00:14:11.778 response: 00:14:11.778 { 00:14:11.778 "code": -32602, 00:14:11.778 "message": "Invalid 
MN SPDK_Controller\u001f" 00:14:11.778 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.778 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:12.040 19:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.040 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zK'\''`im`2G=_+ HXG[8.D' 00:14:12.041 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'zK'\''`im`2G=_+ HXG[8.D' nqn.2016-06.io.spdk:cnode10207 00:14:12.303 [2024-11-05 19:03:41.403044] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10207: invalid serial number 'zK'`im`2G=_+ HXG[8.D' 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:12.303 { 00:14:12.303 "nqn": "nqn.2016-06.io.spdk:cnode10207", 00:14:12.303 "serial_number": "zK'\''`im`2G=_+ HXG[8.\u007fD", 00:14:12.303 "method": "nvmf_create_subsystem", 00:14:12.303 "req_id": 1 00:14:12.303 } 00:14:12.303 Got JSON-RPC error response 00:14:12.303 response: 00:14:12.303 { 00:14:12.303 "code": -32602, 00:14:12.303 "message": "Invalid SN zK'\''`im`2G=_+ HXG[8.\u007fD" 00:14:12.303 }' 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:12.303 { 00:14:12.303 "nqn": "nqn.2016-06.io.spdk:cnode10207", 00:14:12.303 "serial_number": "zK'`im`2G=_+ HXG[8.\u007fD", 00:14:12.303 "method": "nvmf_create_subsystem", 00:14:12.303 "req_id": 1 00:14:12.303 } 00:14:12.303 Got JSON-RPC error response 00:14:12.303 response: 00:14:12.303 { 00:14:12.303 "code": -32602, 00:14:12.303 "message": "Invalid SN zK'`im`2G=_+ HXG[8.\u007fD" 00:14:12.303 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- 
# local length=41 ll 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:12.303 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
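The long run of printf/echo pairs here is gen_random_s at work: this call asked for a 41-character string (length=41 above; the earlier 21-character run produced the serial number zK'`im`2G=_+ HXG[8.D), and each iteration appends one character by formatting a code point from the 32..127 pool as hex and expanding it with echo -e. A compact sketch of the same helper; picking the index with $RANDOM and the handling of a leading '-' (the `[[ z == \- ]]` guard seen earlier) are guesses, since the trace only shows the per-character expansion:

    gen_random_s() {
        local length=$1 ll c hex string=
        local -a chars=()
        for ((c = 32; c <= 127; c++)); do chars+=("$c"); done   # printable ASCII + DEL
        for ((ll = 0; ll < length; ll++)); do
            hex=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
            string+=$(echo -e "\x$hex")
        done
        # The trace's "[[ z == \- ]]" is this guard: a string starting with
        # '-' would be parsed as an option by rpc.py. What the real helper
        # does on a hit is not shown; escaping is one plausible choice.
        [[ ${string:0:1} == - ]] && string=${string/-/\\-}
        echo "$string"
    }

    gen_random_s 21    # e.g. the invalid serial number exercised above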
00:14:12.304 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [condensed: the trace repeats one short iteration per character -- (( ll++ )), (( ll < length )), printf %x <code>, echo -e '\x..', string+=<char> -- appending '#', '>', 'A', 'A', '7', 'W', 'L', '!', 'L', 'Y', '_', 'm', 'E', '4', '<', '&', 'Z', "'", 'v', '5', '|', '%', "'", 'y', '~', 'C', 'H', 'F', 'q', 'T', '-', 'Y', '8', 'J' to the random model-number string; the final two iterations follow]
(( ll++ )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tZU2I#>AA7WL!LY_mE4<&Z'\''v5|%'\''y~CHFqT-Y8J+W' 00:14:12.566 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tZU2I#>AA7WL!LY_mE4<&Z'\''v5|%'\''y~CHFqT-Y8J+W' nqn.2016-06.io.spdk:cnode19488 00:14:12.828 [2024-11-05 19:03:41.916700] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19488: invalid model number 'tZU2I#>AA7WL!LY_mE4<&Z'v5|%'y~CHFqT-Y8J+W' 00:14:12.828 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:12.828 { 00:14:12.828 "nqn": "nqn.2016-06.io.spdk:cnode19488", 00:14:12.828 "model_number": "tZU2I#>AA7WL!LY_mE4<&Z'\''v5|%'\''y~CHFqT-Y8J+W", 00:14:12.828 "method": "nvmf_create_subsystem", 00:14:12.828 "req_id": 1 00:14:12.828 } 00:14:12.828 Got JSON-RPC error response 00:14:12.828 response: 00:14:12.828 { 00:14:12.828 "code": -32602, 00:14:12.828 "message": "Invalid MN tZU2I#>AA7WL!LY_mE4<&Z'\''v5|%'\''y~CHFqT-Y8J+W" 00:14:12.828 }' 00:14:12.828 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:12.828 { 00:14:12.828 "nqn": "nqn.2016-06.io.spdk:cnode19488", 00:14:12.828 "model_number": "tZU2I#>AA7WL!LY_mE4<&Z'v5|%'y~CHFqT-Y8J+W", 00:14:12.828 "method": "nvmf_create_subsystem", 00:14:12.828 "req_id": 1 00:14:12.828 } 00:14:12.828 Got JSON-RPC error response 00:14:12.828 response: 00:14:12.828 { 00:14:12.828 "code": -32602, 00:14:12.828 "message": "Invalid MN tZU2I#>AA7WL!LY_mE4<&Z'v5|%'y~CHFqT-Y8J+W" 00:14:12.828 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:12.828 19:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:12.828 [2024-11-05 19:03:42.101387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.828 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:13.089 19:03:42 
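The loop condensed above is invalid.sh assembling an intentionally ugly 41-character model number one character at a time, so that nvmf_create_subsystem can be handed a value the target must reject. A minimal standalone sketch of that character-assembly idiom (not the test's exact code; the printable-ASCII range and the use of bash's $RANDOM are assumptions):

    string=''
    length=41
    for (( ll = 0; ll < length; ll++ )); do
        c=$(( 33 + RANDOM % 94 ))       # printable ASCII, 0x21..0x7e
        hex=$(printf %x "$c")           # e.g. 35 -> "23"
        string+=$(echo -e "\\x$hex")    # "\x23" -> '#'
    done
    echo "$string"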
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.1 -s 4421 00:14:13.350 [2024-11-05 19:03:42.470517] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:13.350 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='request: 00:14:13.350 { 00:14:13.350 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:13.350 "listen_address": { 00:14:13.350 "trtype": "tcp", 00:14:13.350 "traddr": "10.0.0.1", 00:14:13.350 "trsvcid": "4421" 00:14:13.350 }, 00:14:13.350 "method": "nvmf_subsystem_remove_listener", 00:14:13.350 "req_id": 1 00:14:13.350 } 00:14:13.350 Got JSON-RPC error response 00:14:13.350 response: 00:14:13.350 { 00:14:13.350 "code": -32602, 00:14:13.350 "message": "Invalid parameters" 00:14:13.350 }' 00:14:13.350 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ request: 00:14:13.350 { 00:14:13.350 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:13.350 "listen_address": { 00:14:13.350 "trtype": "tcp", 00:14:13.350 "traddr": "10.0.0.1", 00:14:13.350 "trsvcid": "4421" 00:14:13.350 }, 00:14:13.350 "method": "nvmf_subsystem_remove_listener", 00:14:13.350 "req_id": 1 00:14:13.350 } 00:14:13.350 Got JSON-RPC error response 00:14:13.350 response: 00:14:13.350 { 00:14:13.350 "code": -32602, 00:14:13.350 "message": "Invalid parameters" 00:14:13.350 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:13.350 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24053 -i 0 00:14:13.350 [2024-11-05 19:03:42.659070] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24053: invalid cntlid range [0-65519] 00:14:13.611 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='request: 00:14:13.611 { 00:14:13.611 "nqn": "nqn.2016-06.io.spdk:cnode24053", 00:14:13.611 "min_cntlid": 0, 00:14:13.611 "method": "nvmf_create_subsystem", 00:14:13.611 "req_id": 1 00:14:13.611 } 00:14:13.611 Got JSON-RPC error response 00:14:13.611 response: 00:14:13.611 { 00:14:13.611 "code": -32602, 00:14:13.611 "message": "Invalid cntlid range [0-65519]" 00:14:13.611 }' 00:14:13.611 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # [[ request: 00:14:13.611 { 00:14:13.611 "nqn": "nqn.2016-06.io.spdk:cnode24053", 00:14:13.611 "min_cntlid": 0, 00:14:13.611 "method": "nvmf_create_subsystem", 00:14:13.611 "req_id": 1 00:14:13.611 } 00:14:13.611 Got JSON-RPC error response 00:14:13.611 response: 00:14:13.611 { 00:14:13.611 "code": -32602, 00:14:13.611 "message": "Invalid cntlid range [0-65519]" 00:14:13.611 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.611 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9588 -i 65520 00:14:13.611 [2024-11-05 19:03:42.847694] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9588: invalid cntlid range [65520-65519] 00:14:13.611 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='request: 00:14:13.611 { 00:14:13.611 "nqn": "nqn.2016-06.io.spdk:cnode9588", 00:14:13.611 "min_cntlid": 65520, 00:14:13.611 "method": 
"nvmf_create_subsystem", 00:14:13.611 "req_id": 1 00:14:13.611 } 00:14:13.611 Got JSON-RPC error response 00:14:13.611 response: 00:14:13.611 { 00:14:13.611 "code": -32602, 00:14:13.611 "message": "Invalid cntlid range [65520-65519]" 00:14:13.611 }' 00:14:13.611 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ request: 00:14:13.611 { 00:14:13.611 "nqn": "nqn.2016-06.io.spdk:cnode9588", 00:14:13.611 "min_cntlid": 65520, 00:14:13.611 "method": "nvmf_create_subsystem", 00:14:13.611 "req_id": 1 00:14:13.611 } 00:14:13.611 Got JSON-RPC error response 00:14:13.611 response: 00:14:13.611 { 00:14:13.612 "code": -32602, 00:14:13.612 "message": "Invalid cntlid range [65520-65519]" 00:14:13.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.612 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17780 -I 0 00:14:13.873 [2024-11-05 19:03:43.032249] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17780: invalid cntlid range [1-0] 00:14:13.873 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='request: 00:14:13.873 { 00:14:13.873 "nqn": "nqn.2016-06.io.spdk:cnode17780", 00:14:13.873 "max_cntlid": 0, 00:14:13.873 "method": "nvmf_create_subsystem", 00:14:13.873 "req_id": 1 00:14:13.873 } 00:14:13.873 Got JSON-RPC error response 00:14:13.873 response: 00:14:13.873 { 00:14:13.873 "code": -32602, 00:14:13.873 "message": "Invalid cntlid range [1-0]" 00:14:13.873 }' 00:14:13.873 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ request: 00:14:13.873 { 00:14:13.873 "nqn": "nqn.2016-06.io.spdk:cnode17780", 00:14:13.873 "max_cntlid": 0, 00:14:13.873 "method": "nvmf_create_subsystem", 00:14:13.873 "req_id": 1 00:14:13.873 } 00:14:13.873 Got JSON-RPC error response 00:14:13.873 response: 00:14:13.873 { 00:14:13.873 "code": -32602, 00:14:13.873 "message": "Invalid cntlid range [1-0]" 00:14:13.873 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.873 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12678 -I 65520 00:14:14.134 [2024-11-05 19:03:43.212831] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12678: invalid cntlid range [1-65520] 00:14:14.134 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='request: 00:14:14.134 { 00:14:14.134 "nqn": "nqn.2016-06.io.spdk:cnode12678", 00:14:14.134 "max_cntlid": 65520, 00:14:14.134 "method": "nvmf_create_subsystem", 00:14:14.134 "req_id": 1 00:14:14.134 } 00:14:14.134 Got JSON-RPC error response 00:14:14.134 response: 00:14:14.134 { 00:14:14.134 "code": -32602, 00:14:14.134 "message": "Invalid cntlid range [1-65520]" 00:14:14.134 }' 00:14:14.134 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ request: 00:14:14.134 { 00:14:14.134 "nqn": "nqn.2016-06.io.spdk:cnode12678", 00:14:14.134 "max_cntlid": 65520, 00:14:14.134 "method": "nvmf_create_subsystem", 00:14:14.134 "req_id": 1 00:14:14.134 } 00:14:14.134 Got JSON-RPC error response 00:14:14.134 response: 00:14:14.134 { 00:14:14.134 "code": -32602, 00:14:14.134 "message": "Invalid cntlid range [1-65520]" 00:14:14.134 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:14.134 19:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17596 -i 6 -I 5 00:14:14.134 [2024-11-05 19:03:43.393421] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17596: invalid cntlid range [6-5] 00:14:14.134 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # out='request: 00:14:14.134 { 00:14:14.134 "nqn": "nqn.2016-06.io.spdk:cnode17596", 00:14:14.134 "min_cntlid": 6, 00:14:14.134 "max_cntlid": 5, 00:14:14.134 "method": "nvmf_create_subsystem", 00:14:14.134 "req_id": 1 00:14:14.134 } 00:14:14.134 Got JSON-RPC error response 00:14:14.134 response: 00:14:14.134 { 00:14:14.134 "code": -32602, 00:14:14.134 "message": "Invalid cntlid range [6-5]" 00:14:14.134 }' 00:14:14.134 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ request: 00:14:14.134 { 00:14:14.134 "nqn": "nqn.2016-06.io.spdk:cnode17596", 00:14:14.134 "min_cntlid": 6, 00:14:14.134 "max_cntlid": 5, 00:14:14.134 "method": "nvmf_create_subsystem", 00:14:14.134 "req_id": 1 00:14:14.134 } 00:14:14.134 Got JSON-RPC error response 00:14:14.134 response: 00:14:14.134 { 00:14:14.134 "code": -32602, 00:14:14.134 "message": "Invalid cntlid range [6-5]" 00:14:14.134 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:14.134 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:14:14.395 { 00:14:14.395 "name": "foobar", 00:14:14.395 "method": "nvmf_delete_target", 00:14:14.395 "req_id": 1 00:14:14.395 } 00:14:14.395 Got JSON-RPC error response 00:14:14.395 response: 00:14:14.395 { 00:14:14.395 "code": -32602, 00:14:14.395 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:14.395 }' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:14:14.395 { 00:14:14.395 "name": "foobar", 00:14:14.395 "method": "nvmf_delete_target", 00:14:14.395 "req_id": 1 00:14:14.395 } 00:14:14.395 Got JSON-RPC error response 00:14:14.395 response: 00:14:14.395 { 00:14:14.395 "code": -32602, 00:14:14.395 "message": "The specified target doesn't exist, cannot delete it." 
00:14:14.395 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:14.395 rmmod nvme_tcp 00:14:14.395 rmmod nvme_fabrics 00:14:14.395 rmmod nvme_keyring 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 272070 ']' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 272070 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 272070 ']' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 272070 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 272070 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 272070' 00:14:14.395 killing process with pid 272070 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 272070 00:14:14.395 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 272070 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@264 -- # local dev 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:14.656 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:14.656 19:03:43 
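nvmftestfini's teardown is visible above in three moves: sync and unload the kernel modules under set +e with a {1..20} retry loop (rmmod can briefly fail while references drain), confirm the recorded pid still names the expected reactor process, then kill it and wait so its exit status is reaped before the script proceeds. A simplified sketch of the kill half (assumes the target was started from this shell, so wait can reap it; the reactor check is a simplification of the trace's comm= comparison):

    killprocess() {
        local pid=$1
        # refuse to kill anything that is no longer our app
        ps --no-headers -o comm= "$pid" | grep -q reactor || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap; exit status may be non-zero
    }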
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # return 0 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@284 -- # iptr 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-save 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-restore 00:14:16.569 00:14:16.569 real 0m13.831s 00:14:16.569 user 0m20.551s 00:14:16.569 sys 0m6.440s 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:16.569 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.569 ************************************ 00:14:16.569 END TEST nvmf_invalid 00:14:16.569 ************************************ 00:14:16.830 19:03:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:16.830 19:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:16.830 19:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:16.830 19:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.830 ************************************ 00:14:16.830 START TEST nvmf_connect_stress 00:14:16.830 ************************************ 00:14:16.830 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:16.830 * Looking for test storage... 00:14:16.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.830 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:16.830 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:14:16.830 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.831 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.093 --rc genhtml_branch_coverage=1 00:14:17.093 --rc genhtml_function_coverage=1 00:14:17.093 --rc genhtml_legend=1 00:14:17.093 --rc geninfo_all_blocks=1 00:14:17.093 --rc geninfo_unexecuted_blocks=1 00:14:17.093 00:14:17.093 ' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.093 --rc genhtml_branch_coverage=1 00:14:17.093 --rc genhtml_function_coverage=1 00:14:17.093 --rc genhtml_legend=1 00:14:17.093 --rc geninfo_all_blocks=1 00:14:17.093 --rc geninfo_unexecuted_blocks=1 00:14:17.093 00:14:17.093 ' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.093 --rc genhtml_branch_coverage=1 00:14:17.093 --rc genhtml_function_coverage=1 00:14:17.093 --rc genhtml_legend=1 00:14:17.093 --rc geninfo_all_blocks=1 00:14:17.093 --rc geninfo_unexecuted_blocks=1 00:14:17.093 00:14:17.093 ' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.093 --rc genhtml_branch_coverage=1 00:14:17.093 --rc genhtml_function_coverage=1 00:14:17.093 --rc genhtml_legend=1 00:14:17.093 --rc geninfo_all_blocks=1 00:14:17.093 --rc geninfo_unexecuted_blocks=1 00:14:17.093 00:14:17.093 ' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:17.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:14:17.093 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:14:25.241 19:03:53 
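The "[: : integer expression expected" message above is a real warning the log captured, not test output: common.sh line 31 executes '[' '' -eq 1 ']' with an empty variable, and test's -eq requires integers on both sides. The harness tolerates it, but the failure and the conventional guard are easy to reproduce (the variable name here is illustrative):

    v=''
    [ "$v" -eq 1 ]                        # -> bash: [: : integer expression expected (status 2)
    [ "${v:-0}" -eq 1 ] || echo "v is empty or not 1"   # default empty to 0 before -eq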
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:25.241 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:25.241 19:03:53 
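gather_supported_nvmf_pci_devs works by table lookup: each supported NIC family (Intel e810/x722, Mellanox mlx) is an array filled from a cache of the PCI bus keyed by "vendor:device", and the script then iterates whatever matched. A reduced sketch of the idiom; pci_bus_cache's shape and its sample content are assumptions based on the trace:

    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1" )  # assumed example content
    intel=0x8086 mellanox=0x15b3
    declare -a e810 x722 pci_devs
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810 QSFP variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810 SFP variant (matches this rig)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs+=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"
    done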
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:25.241 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:25.241 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:25.241 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:25.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.242 19:03:53 
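The "Found net devices under 0000:4b:00.x" lines come from a sysfs glob: every netdev owned by a PCI function appears under /sys/bus/pci/devices/<addr>/net/. A standalone version of that lookup (the operstate read stands in for the trace's [[ up == up ]] test and is an assumption):

    pci=0000:4b:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                         # glob may not match
        dev=${path##*/}                                    # basename of the sysfs entry
        state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
        [[ $state == up ]] && echo "Found net devices under $pci: $dev"
    done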
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:25.242 10.0.0.1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:25.242 10.0.0.2 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
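The interface-pair setup traced above condenses to a short sequence of iproute2 and iptables calls. A recap of the commands the run actually issued, reusing its cvl_0_0/cvl_0_1 device names and the nvmf_ns_spdk namespace (comments added for orientation):

    ip netns add nvmf_ns_spdk                        # target side lives in its own netns
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_0              # initiator address
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    # The ACCEPT rule is tagged with a comment so teardown can find it later:
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'

The ifalias writes are what later let get_ip_address recover each device's address with a plain cat of /sys/class/net/<dev>/ifalias instead of parsing ip addr output.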
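One helper worth pulling out is val_to_ip, which the trace shows turning the pool value 167772161 into 10.0.0.1 before assignment. A minimal sketch of that conversion; the shift-and-mask body is an assumption for illustration, since the trace only records the final printf:

    val_to_ip() {
        local val=$1
        # 167772161 == 0x0A000001: peel off the four octets, high byte first.
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2 (ip_pool advances by 2 per interface pair)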
00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:25.242 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:25.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.634 ms 00:14:25.243 00:14:25.243 --- 10.0.0.1 ping statistics --- 00:14:25.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.243 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:25.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:14:25.243 00:14:25.243 --- 10.0.0.2 ping statistics --- 00:14:25.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.243 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:25.243 19:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:14:25.243 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=277273 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 277273 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 277273 ']' 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 
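nvmfappstart above launches the target inside the namespace (ip netns exec nvmf_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and then parks in waitforlisten until /var/tmp/spdk.sock answers. A rough sketch of that start-and-wait pattern; the polling loop is illustrative and not waitforlisten's actual body:

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do        # max_retries=100, as traced
        kill -0 "$nvmfpid" || exit 1       # give up if the target already died
        [[ -S $rpc_addr ]] && break        # RPC socket exists: ready to talk
        sleep 0.5
    done

Note the RPC socket is a path-based unix socket, so clients on the host can reach it even though the target's network stack is confined to the namespace.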
00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.244 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 [2024-11-05 19:03:53.854742] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:14:25.244 [2024-11-05 19:03:53.854808] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.244 [2024-11-05 19:03:53.923048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.244 [2024-11-05 19:03:53.955002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.244 [2024-11-05 19:03:53.955034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.244 [2024-11-05 19:03:53.955040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.244 [2024-11-05 19:03:53.955045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.244 [2024-11-05 19:03:53.955049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.244 [2024-11-05 19:03:53.956200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.244 [2024-11-05 19:03:53.956363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.244 [2024-11-05 19:03:53.956365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 [2024-11-05 19:03:54.076119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 [2024-11-05 19:03:54.100364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.244 NULL1 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=277299 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:25.244 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.245 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.817 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.817 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:25.817 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.817 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.817 19:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.078 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.078 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:26.078 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.078 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.078 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.339 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.339 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:26.339 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.339 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.339 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.599 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.599 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:26.599 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.599 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.599 19:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
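rpc_cmd above is the test framework's channel to the target's JSON-RPC socket; the same provisioning sequence can be replayed with scripts/rpc.py against the default /var/tmp/spdk.sock. The four calls traced above, flags copied verbatim from the log:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                           # allow any host, 10 namespaces max
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                               # listen on the namespaced address
    $rpc bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512 B blocks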
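The long run of kill -0 277299 checks that follows is connect_stress.sh keeping the target busy for as long as the stress binary runs: kill -0 delivers no signal and merely tests that the PID is still alive. Reduced to its shape (the exact RPCs batched into rpc.txt by the seq 1 20 loop are not visible in the trace, so they are elided here):

    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"          # replay the batched RPCs while the stress run continues
    done
    wait "$PERF_PID"               # reap it once kill -0 starts failing
    rm -f "$rpcs"

which is why the loop ends with the expected 'kill: (277299) - No such process' once the ten-second connect_stress run (-t 10) exits.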
00:14:26.860 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.860 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:26.860 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.860 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.860 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.432 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.432 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:27.432 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.432 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.432 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.693 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.693 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:27.693 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.693 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.693 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.953 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.953 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:27.953 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.953 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.953 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.214 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.214 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:28.214 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.214 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.214 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.475 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.475 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:28.475 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.736 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.736 19:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.997 19:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.997 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:28.997 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.997 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.997 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.258 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.258 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:29.258 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.258 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.258 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.519 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:29.519 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.519 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.519 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.780 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.780 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:29.780 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.780 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.780 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.351 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.351 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:30.351 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.351 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.351 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.612 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.612 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:30.612 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.612 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.612 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.874 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.874 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:30.874 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.874 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.874 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.135 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.135 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:31.135 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.135 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.135 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.706 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.706 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:31.706 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.706 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.706 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.967 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.967 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:31.967 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.967 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.967 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.228 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.228 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:32.228 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.228 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.228 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.488 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.488 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:32.488 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.488 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.488 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.748 19:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.748 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:32.748 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.748 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.748 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.318 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.318 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:33.318 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.319 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.319 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.578 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.578 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:33.578 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.578 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.578 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.839 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.839 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:33.839 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.839 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.839 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.099 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.099 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:34.099 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.099 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.099 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.359 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.359 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:34.359 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.359 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.359 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 19:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.929 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:34.929 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.929 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.929 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 277299 00:14:35.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (277299) - No such process 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 277299 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:35.190 rmmod nvme_tcp 00:14:35.190 rmmod nvme_fabrics 00:14:35.190 rmmod nvme_keyring 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 277273 ']' 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 277273 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 277273 ']' 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 277273 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 277273 00:14:35.190 
19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 277273' 00:14:35.190 killing process with pid 277273 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 277273 00:14:35.190 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 277273 00:14:35.450 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@264 -- # local dev 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:35.451 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # return 0 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:37.362 19:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@284 -- # iptr 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-save 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-restore 00:14:37.362 00:14:37.362 real 0m20.703s 00:14:37.362 user 0m40.494s 00:14:37.362 sys 0m9.085s 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:37.362 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.362 ************************************ 00:14:37.362 END TEST nvmf_connect_stress 00:14:37.362 ************************************ 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.624 ************************************ 00:14:37.624 START TEST nvmf_fused_ordering 00:14:37.624 ************************************ 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:37.624 * Looking for test storage... 
00:14:37.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.624 --rc genhtml_branch_coverage=1 00:14:37.624 --rc genhtml_function_coverage=1 00:14:37.624 --rc genhtml_legend=1 00:14:37.624 --rc geninfo_all_blocks=1 00:14:37.624 --rc geninfo_unexecuted_blocks=1 00:14:37.624 00:14:37.624 ' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.624 --rc genhtml_branch_coverage=1 00:14:37.624 --rc genhtml_function_coverage=1 00:14:37.624 --rc genhtml_legend=1 00:14:37.624 --rc geninfo_all_blocks=1 00:14:37.624 --rc geninfo_unexecuted_blocks=1 00:14:37.624 00:14:37.624 ' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.624 --rc genhtml_branch_coverage=1 00:14:37.624 --rc genhtml_function_coverage=1 00:14:37.624 --rc genhtml_legend=1 00:14:37.624 --rc geninfo_all_blocks=1 00:14:37.624 --rc geninfo_unexecuted_blocks=1 00:14:37.624 00:14:37.624 ' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.624 --rc genhtml_branch_coverage=1 00:14:37.624 --rc genhtml_function_coverage=1 00:14:37.624 --rc genhtml_legend=1 00:14:37.624 --rc geninfo_all_blocks=1 00:14:37.624 --rc geninfo_unexecuted_blocks=1 00:14:37.624 00:14:37.624 ' 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.624 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:37.886 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:37.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:14:37.887 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.028 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.028 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:14:46.028 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:14:46.029 19:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:46.029 19:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.029 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.029 19:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # create_target_ns 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:46.029 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:46.030 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:46.030 10.0.0.1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:46.030 10.0.0.2 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:46.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.694 ms 00:14:46.030 00:14:46.030 --- 10.0.0.1 ping statistics --- 00:14:46.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.030 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:46.030 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:46.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:14:46.031 00:14:46.031 --- 10.0.0.2 ping statistics --- 00:14:46.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.031 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:46.031 19:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=283669 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 283669 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 283669 ']' 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:46.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.031 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.032 [2024-11-05 19:04:14.401022] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:14:46.032 [2024-11-05 19:04:14.401099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.032 [2024-11-05 19:04:14.501094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.032 [2024-11-05 19:04:14.551221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.032 [2024-11-05 19:04:14.551267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.032 [2024-11-05 19:04:14.551275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.032 [2024-11-05 19:04:14.551283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.032 [2024-11-05 19:04:14.551289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.032 [2024-11-05 19:04:14.552064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 [2024-11-05 19:04:15.263161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 [2024-11-05 19:04:15.279470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 NULL1 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.032 19:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:46.032 [2024-11-05 19:04:15.338346] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
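The target bring-up traced above condenses to five RPCs plus the exerciser invocation. Replayed as a plain script (a sketch assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket; every flag and argument below is taken verbatim from the rpc_cmd lines in the trace):

# fused_ordering.sh@15: TCP transport with -o and an 8192-byte I/O unit size (-u)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# fused_ordering.sh@16: subsystem allowing any host (-a), serial SPDK00000000000001, at most 10 namespaces
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# fused_ordering.sh@17: listen on the first target IP used throughout this run
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# fused_ordering.sh@18-20: 1000 MiB null bdev with 512-byte blocks, attached as namespace 1
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# fused_ordering.sh@22: connect and issue the numbered fused command pairs logged below
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'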
00:14:46.032 [2024-11-05 19:04:15.338390] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283725 ] 00:14:46.603 Attached to nqn.2016-06.io.spdk:cnode1 00:14:46.603 Namespace ID: 1 size: 1GB 00:14:46.603 fused_ordering(0) 00:14:46.603 fused_ordering(1) 00:14:46.603 fused_ordering(2) 00:14:46.603 fused_ordering(3) 00:14:46.603 fused_ordering(4) 00:14:46.603 fused_ordering(5) 00:14:46.603 fused_ordering(6) 00:14:46.603 fused_ordering(7) 00:14:46.603 fused_ordering(8) 00:14:46.603 fused_ordering(9) 00:14:46.603 fused_ordering(10) 00:14:46.603 fused_ordering(11) 00:14:46.603 fused_ordering(12) 00:14:46.603 fused_ordering(13) 00:14:46.603 fused_ordering(14) 00:14:46.603 fused_ordering(15) 00:14:46.603 fused_ordering(16) 00:14:46.603 fused_ordering(17) 00:14:46.603 fused_ordering(18) 00:14:46.603 fused_ordering(19) 00:14:46.603 fused_ordering(20) 00:14:46.603 fused_ordering(21) 00:14:46.603 fused_ordering(22) 00:14:46.603 fused_ordering(23) 00:14:46.603 fused_ordering(24) 00:14:46.603 fused_ordering(25) 00:14:46.603 fused_ordering(26) 00:14:46.603 fused_ordering(27) 00:14:46.603 fused_ordering(28) 00:14:46.603 fused_ordering(29) 00:14:46.603 fused_ordering(30) 00:14:46.603 fused_ordering(31) 00:14:46.603 fused_ordering(32) 00:14:46.603 fused_ordering(33) 00:14:46.603 fused_ordering(34) 00:14:46.603 fused_ordering(35) 00:14:46.603 fused_ordering(36) 00:14:46.603 fused_ordering(37) 00:14:46.603 fused_ordering(38) 00:14:46.603 fused_ordering(39) 00:14:46.603 fused_ordering(40) 00:14:46.603 fused_ordering(41) 00:14:46.603 fused_ordering(42) 00:14:46.603 fused_ordering(43) 00:14:46.603 fused_ordering(44) 00:14:46.603 fused_ordering(45) 00:14:46.603 fused_ordering(46) 00:14:46.603 fused_ordering(47) 00:14:46.603 fused_ordering(48) 00:14:46.603 fused_ordering(49) 00:14:46.603 fused_ordering(50) 00:14:46.603 fused_ordering(51) 00:14:46.603 fused_ordering(52) 00:14:46.603 fused_ordering(53) 00:14:46.603 fused_ordering(54) 00:14:46.603 fused_ordering(55) 00:14:46.603 fused_ordering(56) 00:14:46.603 fused_ordering(57) 00:14:46.603 fused_ordering(58) 00:14:46.603 fused_ordering(59) 00:14:46.603 fused_ordering(60) 00:14:46.603 fused_ordering(61) 00:14:46.603 fused_ordering(62) 00:14:46.603 fused_ordering(63) 00:14:46.603 fused_ordering(64) 00:14:46.603 fused_ordering(65) 00:14:46.603 fused_ordering(66) 00:14:46.603 fused_ordering(67) 00:14:46.603 fused_ordering(68) 00:14:46.603 fused_ordering(69) 00:14:46.603 fused_ordering(70) 00:14:46.603 fused_ordering(71) 00:14:46.603 fused_ordering(72) 00:14:46.603 fused_ordering(73) 00:14:46.603 fused_ordering(74) 00:14:46.603 fused_ordering(75) 00:14:46.603 fused_ordering(76) 00:14:46.603 fused_ordering(77) 00:14:46.603 fused_ordering(78) 00:14:46.603 fused_ordering(79) 00:14:46.603 fused_ordering(80) 00:14:46.603 fused_ordering(81) 00:14:46.603 fused_ordering(82) 00:14:46.603 fused_ordering(83) 00:14:46.603 fused_ordering(84) 00:14:46.603 fused_ordering(85) 00:14:46.603 fused_ordering(86) 00:14:46.603 fused_ordering(87) 00:14:46.603 fused_ordering(88) 00:14:46.603 fused_ordering(89) 00:14:46.603 fused_ordering(90) 00:14:46.603 fused_ordering(91) 00:14:46.603 fused_ordering(92) 00:14:46.603 fused_ordering(93) 00:14:46.603 fused_ordering(94) 00:14:46.603 fused_ordering(95) 00:14:46.603 fused_ordering(96) 00:14:46.603 fused_ordering(97) 00:14:46.603 fused_ordering(98) 
00:14:46.603 fused_ordering(99) 00:14:46.603 fused_ordering(100) 00:14:46.603 fused_ordering(101) 00:14:46.603 fused_ordering(102) 00:14:46.603 fused_ordering(103) 00:14:46.603 fused_ordering(104) 00:14:46.603 fused_ordering(105) 00:14:46.603 fused_ordering(106) 00:14:46.603 fused_ordering(107) 00:14:46.603 fused_ordering(108) 00:14:46.603 fused_ordering(109) 00:14:46.603 fused_ordering(110) 00:14:46.603 fused_ordering(111) 00:14:46.603 fused_ordering(112) 00:14:46.603 fused_ordering(113) 00:14:46.603 fused_ordering(114) 00:14:46.603 fused_ordering(115) 00:14:46.603 fused_ordering(116) 00:14:46.603 fused_ordering(117) 00:14:46.603 fused_ordering(118) 00:14:46.603 fused_ordering(119) 00:14:46.603 fused_ordering(120) 00:14:46.603 fused_ordering(121) 00:14:46.603 fused_ordering(122) 00:14:46.603 fused_ordering(123) 00:14:46.603 fused_ordering(124) 00:14:46.603 fused_ordering(125) 00:14:46.603 fused_ordering(126) 00:14:46.603 fused_ordering(127) 00:14:46.603 fused_ordering(128) 00:14:46.603 fused_ordering(129) 00:14:46.603 fused_ordering(130) 00:14:46.603 fused_ordering(131) 00:14:46.603 fused_ordering(132) 00:14:46.603 fused_ordering(133) 00:14:46.603 fused_ordering(134) 00:14:46.603 fused_ordering(135) 00:14:46.603 fused_ordering(136) 00:14:46.603 fused_ordering(137) 00:14:46.603 fused_ordering(138) 00:14:46.603 fused_ordering(139) 00:14:46.603 fused_ordering(140) 00:14:46.603 fused_ordering(141) 00:14:46.603 fused_ordering(142) 00:14:46.603 fused_ordering(143) 00:14:46.603 fused_ordering(144) 00:14:46.603 fused_ordering(145) 00:14:46.603 fused_ordering(146) 00:14:46.603 fused_ordering(147) 00:14:46.603 fused_ordering(148) 00:14:46.603 fused_ordering(149) 00:14:46.603 fused_ordering(150) 00:14:46.603 fused_ordering(151) 00:14:46.603 fused_ordering(152) 00:14:46.603 fused_ordering(153) 00:14:46.603 fused_ordering(154) 00:14:46.603 fused_ordering(155) 00:14:46.603 fused_ordering(156) 00:14:46.603 fused_ordering(157) 00:14:46.603 fused_ordering(158) 00:14:46.603 fused_ordering(159) 00:14:46.603 fused_ordering(160) 00:14:46.603 fused_ordering(161) 00:14:46.603 fused_ordering(162) 00:14:46.603 fused_ordering(163) 00:14:46.603 fused_ordering(164) 00:14:46.603 fused_ordering(165) 00:14:46.603 fused_ordering(166) 00:14:46.603 fused_ordering(167) 00:14:46.603 fused_ordering(168) 00:14:46.603 fused_ordering(169) 00:14:46.603 fused_ordering(170) 00:14:46.603 fused_ordering(171) 00:14:46.603 fused_ordering(172) 00:14:46.603 fused_ordering(173) 00:14:46.603 fused_ordering(174) 00:14:46.603 fused_ordering(175) 00:14:46.603 fused_ordering(176) 00:14:46.603 fused_ordering(177) 00:14:46.603 fused_ordering(178) 00:14:46.603 fused_ordering(179) 00:14:46.603 fused_ordering(180) 00:14:46.603 fused_ordering(181) 00:14:46.603 fused_ordering(182) 00:14:46.603 fused_ordering(183) 00:14:46.603 fused_ordering(184) 00:14:46.603 fused_ordering(185) 00:14:46.604 fused_ordering(186) 00:14:46.604 fused_ordering(187) 00:14:46.604 fused_ordering(188) 00:14:46.604 fused_ordering(189) 00:14:46.604 fused_ordering(190) 00:14:46.604 fused_ordering(191) 00:14:46.604 fused_ordering(192) 00:14:46.604 fused_ordering(193) 00:14:46.604 fused_ordering(194) 00:14:46.604 fused_ordering(195) 00:14:46.604 fused_ordering(196) 00:14:46.604 fused_ordering(197) 00:14:46.604 fused_ordering(198) 00:14:46.604 fused_ordering(199) 00:14:46.604 fused_ordering(200) 00:14:46.604 fused_ordering(201) 00:14:46.604 fused_ordering(202) 00:14:46.604 fused_ordering(203) 00:14:46.604 fused_ordering(204) 00:14:46.604 fused_ordering(205) 00:14:46.864 
fused_ordering(206) 00:14:46.864 fused_ordering(207) 00:14:46.864 fused_ordering(208) 00:14:46.864 fused_ordering(209) 00:14:46.864 fused_ordering(210) 00:14:46.864 fused_ordering(211) 00:14:46.864 fused_ordering(212) 00:14:46.864 fused_ordering(213) 00:14:46.864 fused_ordering(214) 00:14:46.864 fused_ordering(215) 00:14:46.864 fused_ordering(216) 00:14:46.864 fused_ordering(217) 00:14:46.864 fused_ordering(218) 00:14:46.864 fused_ordering(219) 00:14:46.864 fused_ordering(220) 00:14:46.864 fused_ordering(221) 00:14:46.864 fused_ordering(222) 00:14:46.864 fused_ordering(223) 00:14:46.864 fused_ordering(224) 00:14:46.864 fused_ordering(225) 00:14:46.864 fused_ordering(226) 00:14:46.864 fused_ordering(227) 00:14:46.864 fused_ordering(228) 00:14:46.864 fused_ordering(229) 00:14:46.864 fused_ordering(230) 00:14:46.864 fused_ordering(231) 00:14:46.864 fused_ordering(232) 00:14:46.864 fused_ordering(233) 00:14:46.864 fused_ordering(234) 00:14:46.864 fused_ordering(235) 00:14:46.864 fused_ordering(236) 00:14:46.864 fused_ordering(237) 00:14:46.864 fused_ordering(238) 00:14:46.864 fused_ordering(239) 00:14:46.864 fused_ordering(240) 00:14:46.864 fused_ordering(241) 00:14:46.864 fused_ordering(242) 00:14:46.864 fused_ordering(243) 00:14:46.864 fused_ordering(244) 00:14:46.864 fused_ordering(245) 00:14:46.864 fused_ordering(246) 00:14:46.864 fused_ordering(247) 00:14:46.864 fused_ordering(248) 00:14:46.864 fused_ordering(249) 00:14:46.864 fused_ordering(250) 00:14:46.864 fused_ordering(251) 00:14:46.864 fused_ordering(252) 00:14:46.864 fused_ordering(253) 00:14:46.864 fused_ordering(254) 00:14:46.864 fused_ordering(255) 00:14:46.864 fused_ordering(256) 00:14:46.864 fused_ordering(257) 00:14:46.864 fused_ordering(258) 00:14:46.864 fused_ordering(259) 00:14:46.864 fused_ordering(260) 00:14:46.864 fused_ordering(261) 00:14:46.864 fused_ordering(262) 00:14:46.864 fused_ordering(263) 00:14:46.864 fused_ordering(264) 00:14:46.864 fused_ordering(265) 00:14:46.864 fused_ordering(266) 00:14:46.864 fused_ordering(267) 00:14:46.864 fused_ordering(268) 00:14:46.864 fused_ordering(269) 00:14:46.864 fused_ordering(270) 00:14:46.864 fused_ordering(271) 00:14:46.864 fused_ordering(272) 00:14:46.864 fused_ordering(273) 00:14:46.864 fused_ordering(274) 00:14:46.864 fused_ordering(275) 00:14:46.864 fused_ordering(276) 00:14:46.864 fused_ordering(277) 00:14:46.864 fused_ordering(278) 00:14:46.864 fused_ordering(279) 00:14:46.864 fused_ordering(280) 00:14:46.864 fused_ordering(281) 00:14:46.864 fused_ordering(282) 00:14:46.864 fused_ordering(283) 00:14:46.864 fused_ordering(284) 00:14:46.864 fused_ordering(285) 00:14:46.864 fused_ordering(286) 00:14:46.864 fused_ordering(287) 00:14:46.864 fused_ordering(288) 00:14:46.864 fused_ordering(289) 00:14:46.864 fused_ordering(290) 00:14:46.864 fused_ordering(291) 00:14:46.864 fused_ordering(292) 00:14:46.864 fused_ordering(293) 00:14:46.864 fused_ordering(294) 00:14:46.864 fused_ordering(295) 00:14:46.864 fused_ordering(296) 00:14:46.864 fused_ordering(297) 00:14:46.864 fused_ordering(298) 00:14:46.864 fused_ordering(299) 00:14:46.864 fused_ordering(300) 00:14:46.864 fused_ordering(301) 00:14:46.864 fused_ordering(302) 00:14:46.864 fused_ordering(303) 00:14:46.864 fused_ordering(304) 00:14:46.864 fused_ordering(305) 00:14:46.864 fused_ordering(306) 00:14:46.864 fused_ordering(307) 00:14:46.864 fused_ordering(308) 00:14:46.864 fused_ordering(309) 00:14:46.864 fused_ordering(310) 00:14:46.864 fused_ordering(311) 00:14:46.864 fused_ordering(312) 00:14:46.864 fused_ordering(313) 
00:14:46.864 fused_ordering(314) 00:14:46.864 fused_ordering(315) 00:14:46.864 fused_ordering(316) 00:14:46.864 fused_ordering(317) 00:14:46.864 fused_ordering(318) 00:14:46.864 fused_ordering(319) 00:14:46.864 fused_ordering(320) 00:14:46.864 fused_ordering(321) 00:14:46.864 fused_ordering(322) 00:14:46.864 fused_ordering(323) 00:14:46.864 fused_ordering(324) 00:14:46.864 fused_ordering(325) 00:14:46.864 fused_ordering(326) 00:14:46.864 fused_ordering(327) 00:14:46.864 fused_ordering(328) 00:14:46.864 fused_ordering(329) 00:14:46.864 fused_ordering(330) 00:14:46.864 fused_ordering(331) 00:14:46.864 fused_ordering(332) 00:14:46.864 fused_ordering(333) 00:14:46.864 fused_ordering(334) 00:14:46.864 fused_ordering(335) 00:14:46.864 fused_ordering(336) 00:14:46.864 fused_ordering(337) 00:14:46.864 fused_ordering(338) 00:14:46.864 fused_ordering(339) 00:14:46.864 fused_ordering(340) 00:14:46.864 fused_ordering(341) 00:14:46.864 fused_ordering(342) 00:14:46.864 fused_ordering(343) 00:14:46.864 fused_ordering(344) 00:14:46.864 fused_ordering(345) 00:14:46.864 fused_ordering(346) 00:14:46.864 fused_ordering(347) 00:14:46.864 fused_ordering(348) 00:14:46.864 fused_ordering(349) 00:14:46.864 fused_ordering(350) 00:14:46.864 fused_ordering(351) 00:14:46.864 fused_ordering(352) 00:14:46.864 fused_ordering(353) 00:14:46.864 fused_ordering(354) 00:14:46.864 fused_ordering(355) 00:14:46.864 fused_ordering(356) 00:14:46.864 fused_ordering(357) 00:14:46.864 fused_ordering(358) 00:14:46.864 fused_ordering(359) 00:14:46.864 fused_ordering(360) 00:14:46.864 fused_ordering(361) 00:14:46.864 fused_ordering(362) 00:14:46.864 fused_ordering(363) 00:14:46.864 fused_ordering(364) 00:14:46.864 fused_ordering(365) 00:14:46.864 fused_ordering(366) 00:14:46.864 fused_ordering(367) 00:14:46.864 fused_ordering(368) 00:14:46.864 fused_ordering(369) 00:14:46.864 fused_ordering(370) 00:14:46.864 fused_ordering(371) 00:14:46.864 fused_ordering(372) 00:14:46.864 fused_ordering(373) 00:14:46.864 fused_ordering(374) 00:14:46.864 fused_ordering(375) 00:14:46.864 fused_ordering(376) 00:14:46.864 fused_ordering(377) 00:14:46.864 fused_ordering(378) 00:14:46.864 fused_ordering(379) 00:14:46.864 fused_ordering(380) 00:14:46.864 fused_ordering(381) 00:14:46.864 fused_ordering(382) 00:14:46.864 fused_ordering(383) 00:14:46.864 fused_ordering(384) 00:14:46.864 fused_ordering(385) 00:14:46.864 fused_ordering(386) 00:14:46.864 fused_ordering(387) 00:14:46.864 fused_ordering(388) 00:14:46.864 fused_ordering(389) 00:14:46.864 fused_ordering(390) 00:14:46.864 fused_ordering(391) 00:14:46.864 fused_ordering(392) 00:14:46.864 fused_ordering(393) 00:14:46.864 fused_ordering(394) 00:14:46.864 fused_ordering(395) 00:14:46.864 fused_ordering(396) 00:14:46.864 fused_ordering(397) 00:14:46.864 fused_ordering(398) 00:14:46.864 fused_ordering(399) 00:14:46.864 fused_ordering(400) 00:14:46.864 fused_ordering(401) 00:14:46.864 fused_ordering(402) 00:14:46.864 fused_ordering(403) 00:14:46.864 fused_ordering(404) 00:14:46.864 fused_ordering(405) 00:14:46.864 fused_ordering(406) 00:14:46.864 fused_ordering(407) 00:14:46.864 fused_ordering(408) 00:14:46.864 fused_ordering(409) 00:14:46.864 fused_ordering(410) 00:14:47.124 fused_ordering(411) 00:14:47.124 fused_ordering(412) 00:14:47.124 fused_ordering(413) 00:14:47.124 fused_ordering(414) 00:14:47.124 fused_ordering(415) 00:14:47.124 fused_ordering(416) 00:14:47.124 fused_ordering(417) 00:14:47.124 fused_ordering(418) 00:14:47.124 fused_ordering(419) 00:14:47.124 fused_ordering(420) 00:14:47.125 
fused_ordering(421) 00:14:47.125 fused_ordering(422) 00:14:47.125 fused_ordering(423) 00:14:47.125 fused_ordering(424) 00:14:47.125 fused_ordering(425) 00:14:47.125 fused_ordering(426) 00:14:47.125 fused_ordering(427) 00:14:47.125 fused_ordering(428) 00:14:47.125 fused_ordering(429) 00:14:47.125 fused_ordering(430) 00:14:47.125 fused_ordering(431) 00:14:47.125 fused_ordering(432) 00:14:47.125 fused_ordering(433) 00:14:47.125 fused_ordering(434) 00:14:47.125 fused_ordering(435) 00:14:47.125 fused_ordering(436) 00:14:47.125 fused_ordering(437) 00:14:47.125 fused_ordering(438) 00:14:47.125 fused_ordering(439) 00:14:47.125 fused_ordering(440) 00:14:47.125 fused_ordering(441) 00:14:47.125 fused_ordering(442) 00:14:47.125 fused_ordering(443) 00:14:47.125 fused_ordering(444) 00:14:47.125 fused_ordering(445) 00:14:47.125 fused_ordering(446) 00:14:47.125 fused_ordering(447) 00:14:47.125 fused_ordering(448) 00:14:47.125 fused_ordering(449) 00:14:47.125 fused_ordering(450) 00:14:47.125 fused_ordering(451) 00:14:47.125 fused_ordering(452) 00:14:47.125 fused_ordering(453) 00:14:47.125 fused_ordering(454) 00:14:47.125 fused_ordering(455) 00:14:47.125 fused_ordering(456) 00:14:47.125 fused_ordering(457) 00:14:47.125 fused_ordering(458) 00:14:47.125 fused_ordering(459) 00:14:47.125 fused_ordering(460) 00:14:47.125 fused_ordering(461) 00:14:47.125 fused_ordering(462) 00:14:47.125 fused_ordering(463) 00:14:47.125 fused_ordering(464) 00:14:47.125 fused_ordering(465) 00:14:47.125 fused_ordering(466) 00:14:47.125 fused_ordering(467) 00:14:47.125 fused_ordering(468) 00:14:47.125 fused_ordering(469) 00:14:47.125 fused_ordering(470) 00:14:47.125 fused_ordering(471) 00:14:47.125 fused_ordering(472) 00:14:47.125 fused_ordering(473) 00:14:47.125 fused_ordering(474) 00:14:47.125 fused_ordering(475) 00:14:47.125 fused_ordering(476) 00:14:47.125 fused_ordering(477) 00:14:47.125 fused_ordering(478) 00:14:47.125 fused_ordering(479) 00:14:47.125 fused_ordering(480) 00:14:47.125 fused_ordering(481) 00:14:47.125 fused_ordering(482) 00:14:47.125 fused_ordering(483) 00:14:47.125 fused_ordering(484) 00:14:47.125 fused_ordering(485) 00:14:47.125 fused_ordering(486) 00:14:47.125 fused_ordering(487) 00:14:47.125 fused_ordering(488) 00:14:47.125 fused_ordering(489) 00:14:47.125 fused_ordering(490) 00:14:47.125 fused_ordering(491) 00:14:47.125 fused_ordering(492) 00:14:47.125 fused_ordering(493) 00:14:47.125 fused_ordering(494) 00:14:47.125 fused_ordering(495) 00:14:47.125 fused_ordering(496) 00:14:47.125 fused_ordering(497) 00:14:47.125 fused_ordering(498) 00:14:47.125 fused_ordering(499) 00:14:47.125 fused_ordering(500) 00:14:47.125 fused_ordering(501) 00:14:47.125 fused_ordering(502) 00:14:47.125 fused_ordering(503) 00:14:47.125 fused_ordering(504) 00:14:47.125 fused_ordering(505) 00:14:47.125 fused_ordering(506) 00:14:47.125 fused_ordering(507) 00:14:47.125 fused_ordering(508) 00:14:47.125 fused_ordering(509) 00:14:47.125 fused_ordering(510) 00:14:47.125 fused_ordering(511) 00:14:47.125 fused_ordering(512) 00:14:47.125 fused_ordering(513) 00:14:47.125 fused_ordering(514) 00:14:47.125 fused_ordering(515) 00:14:47.125 fused_ordering(516) 00:14:47.125 fused_ordering(517) 00:14:47.125 fused_ordering(518) 00:14:47.125 fused_ordering(519) 00:14:47.125 fused_ordering(520) 00:14:47.125 fused_ordering(521) 00:14:47.125 fused_ordering(522) 00:14:47.125 fused_ordering(523) 00:14:47.125 fused_ordering(524) 00:14:47.125 fused_ordering(525) 00:14:47.125 fused_ordering(526) 00:14:47.125 fused_ordering(527) 00:14:47.125 fused_ordering(528) 
00:14:47.125 fused_ordering(529) 00:14:47.125 fused_ordering(530) 00:14:47.125 fused_ordering(531) 00:14:47.125 fused_ordering(532) 00:14:47.125 fused_ordering(533) 00:14:47.125 fused_ordering(534) 00:14:47.125 fused_ordering(535) 00:14:47.125 fused_ordering(536) 00:14:47.125 fused_ordering(537) 00:14:47.125 fused_ordering(538) 00:14:47.125 fused_ordering(539) 00:14:47.125 fused_ordering(540) 00:14:47.125 fused_ordering(541) 00:14:47.125 fused_ordering(542) 00:14:47.125 fused_ordering(543) 00:14:47.125 fused_ordering(544) 00:14:47.125 fused_ordering(545) 00:14:47.125 fused_ordering(546) 00:14:47.125 fused_ordering(547) 00:14:47.125 fused_ordering(548) 00:14:47.125 fused_ordering(549) 00:14:47.125 fused_ordering(550) 00:14:47.125 fused_ordering(551) 00:14:47.125 fused_ordering(552) 00:14:47.125 fused_ordering(553) 00:14:47.125 fused_ordering(554) 00:14:47.125 fused_ordering(555) 00:14:47.125 fused_ordering(556) 00:14:47.125 fused_ordering(557) 00:14:47.125 fused_ordering(558) 00:14:47.125 fused_ordering(559) 00:14:47.125 fused_ordering(560) 00:14:47.125 fused_ordering(561) 00:14:47.125 fused_ordering(562) 00:14:47.125 fused_ordering(563) 00:14:47.125 fused_ordering(564) 00:14:47.125 fused_ordering(565) 00:14:47.125 fused_ordering(566) 00:14:47.125 fused_ordering(567) 00:14:47.125 fused_ordering(568) 00:14:47.125 fused_ordering(569) 00:14:47.125 fused_ordering(570) 00:14:47.125 fused_ordering(571) 00:14:47.125 fused_ordering(572) 00:14:47.125 fused_ordering(573) 00:14:47.125 fused_ordering(574) 00:14:47.125 fused_ordering(575) 00:14:47.125 fused_ordering(576) 00:14:47.125 fused_ordering(577) 00:14:47.125 fused_ordering(578) 00:14:47.125 fused_ordering(579) 00:14:47.125 fused_ordering(580) 00:14:47.125 fused_ordering(581) 00:14:47.125 fused_ordering(582) 00:14:47.125 fused_ordering(583) 00:14:47.125 fused_ordering(584) 00:14:47.125 fused_ordering(585) 00:14:47.125 fused_ordering(586) 00:14:47.125 fused_ordering(587) 00:14:47.125 fused_ordering(588) 00:14:47.125 fused_ordering(589) 00:14:47.125 fused_ordering(590) 00:14:47.125 fused_ordering(591) 00:14:47.125 fused_ordering(592) 00:14:47.125 fused_ordering(593) 00:14:47.125 fused_ordering(594) 00:14:47.125 fused_ordering(595) 00:14:47.125 fused_ordering(596) 00:14:47.125 fused_ordering(597) 00:14:47.125 fused_ordering(598) 00:14:47.125 fused_ordering(599) 00:14:47.125 fused_ordering(600) 00:14:47.125 fused_ordering(601) 00:14:47.125 fused_ordering(602) 00:14:47.125 fused_ordering(603) 00:14:47.125 fused_ordering(604) 00:14:47.125 fused_ordering(605) 00:14:47.125 fused_ordering(606) 00:14:47.125 fused_ordering(607) 00:14:47.125 fused_ordering(608) 00:14:47.125 fused_ordering(609) 00:14:47.125 fused_ordering(610) 00:14:47.125 fused_ordering(611) 00:14:47.125 fused_ordering(612) 00:14:47.125 fused_ordering(613) 00:14:47.125 fused_ordering(614) 00:14:47.125 fused_ordering(615) 00:14:47.695 fused_ordering(616) 00:14:47.695 fused_ordering(617) 00:14:47.695 fused_ordering(618) 00:14:47.695 fused_ordering(619) 00:14:47.695 fused_ordering(620) 00:14:47.695 fused_ordering(621) 00:14:47.695 fused_ordering(622) 00:14:47.695 fused_ordering(623) 00:14:47.695 fused_ordering(624) 00:14:47.695 fused_ordering(625) 00:14:47.695 fused_ordering(626) 00:14:47.695 fused_ordering(627) 00:14:47.695 fused_ordering(628) 00:14:47.695 fused_ordering(629) 00:14:47.695 fused_ordering(630) 00:14:47.695 fused_ordering(631) 00:14:47.695 fused_ordering(632) 00:14:47.695 fused_ordering(633) 00:14:47.695 fused_ordering(634) 00:14:47.695 fused_ordering(635) 00:14:47.695 
fused_ordering(636) 00:14:47.695 fused_ordering(637) 00:14:47.695 fused_ordering(638) 00:14:47.695 fused_ordering(639) 00:14:47.695 fused_ordering(640) 00:14:47.695 fused_ordering(641) 00:14:47.695 fused_ordering(642) 00:14:47.695 fused_ordering(643) 00:14:47.695 fused_ordering(644) 00:14:47.695 fused_ordering(645) 00:14:47.695 fused_ordering(646) 00:14:47.695 fused_ordering(647) 00:14:47.695 fused_ordering(648) 00:14:47.695 fused_ordering(649) 00:14:47.695 fused_ordering(650) 00:14:47.695 fused_ordering(651) 00:14:47.695 fused_ordering(652) 00:14:47.695 fused_ordering(653) 00:14:47.695 fused_ordering(654) 00:14:47.695 fused_ordering(655) 00:14:47.695 fused_ordering(656) 00:14:47.695 fused_ordering(657) 00:14:47.695 fused_ordering(658) 00:14:47.695 fused_ordering(659) 00:14:47.695 fused_ordering(660) 00:14:47.695 fused_ordering(661) 00:14:47.695 fused_ordering(662) 00:14:47.695 fused_ordering(663) 00:14:47.695 fused_ordering(664) 00:14:47.695 fused_ordering(665) 00:14:47.695 fused_ordering(666) 00:14:47.695 fused_ordering(667) 00:14:47.695 fused_ordering(668) 00:14:47.695 fused_ordering(669) 00:14:47.695 fused_ordering(670) 00:14:47.695 fused_ordering(671) 00:14:47.695 fused_ordering(672) 00:14:47.695 fused_ordering(673) 00:14:47.695 fused_ordering(674) 00:14:47.695 fused_ordering(675) 00:14:47.695 fused_ordering(676) 00:14:47.695 fused_ordering(677) 00:14:47.695 fused_ordering(678) 00:14:47.695 fused_ordering(679) 00:14:47.695 fused_ordering(680) 00:14:47.695 fused_ordering(681) 00:14:47.695 fused_ordering(682) 00:14:47.695 fused_ordering(683) 00:14:47.695 fused_ordering(684) 00:14:47.695 fused_ordering(685) 00:14:47.695 fused_ordering(686) 00:14:47.695 fused_ordering(687) 00:14:47.695 fused_ordering(688) 00:14:47.695 fused_ordering(689) 00:14:47.695 fused_ordering(690) 00:14:47.695 fused_ordering(691) 00:14:47.695 fused_ordering(692) 00:14:47.695 fused_ordering(693) 00:14:47.695 fused_ordering(694) 00:14:47.695 fused_ordering(695) 00:14:47.695 fused_ordering(696) 00:14:47.695 fused_ordering(697) 00:14:47.695 fused_ordering(698) 00:14:47.695 fused_ordering(699) 00:14:47.695 fused_ordering(700) 00:14:47.695 fused_ordering(701) 00:14:47.695 fused_ordering(702) 00:14:47.695 fused_ordering(703) 00:14:47.695 fused_ordering(704) 00:14:47.695 fused_ordering(705) 00:14:47.695 fused_ordering(706) 00:14:47.695 fused_ordering(707) 00:14:47.695 fused_ordering(708) 00:14:47.695 fused_ordering(709) 00:14:47.695 fused_ordering(710) 00:14:47.695 fused_ordering(711) 00:14:47.695 fused_ordering(712) 00:14:47.695 fused_ordering(713) 00:14:47.695 fused_ordering(714) 00:14:47.695 fused_ordering(715) 00:14:47.695 fused_ordering(716) 00:14:47.695 fused_ordering(717) 00:14:47.695 fused_ordering(718) 00:14:47.695 fused_ordering(719) 00:14:47.695 fused_ordering(720) 00:14:47.695 fused_ordering(721) 00:14:47.695 fused_ordering(722) 00:14:47.695 fused_ordering(723) 00:14:47.695 fused_ordering(724) 00:14:47.695 fused_ordering(725) 00:14:47.695 fused_ordering(726) 00:14:47.695 fused_ordering(727) 00:14:47.695 fused_ordering(728) 00:14:47.695 fused_ordering(729) 00:14:47.695 fused_ordering(730) 00:14:47.695 fused_ordering(731) 00:14:47.695 fused_ordering(732) 00:14:47.695 fused_ordering(733) 00:14:47.695 fused_ordering(734) 00:14:47.695 fused_ordering(735) 00:14:47.695 fused_ordering(736) 00:14:47.695 fused_ordering(737) 00:14:47.695 fused_ordering(738) 00:14:47.695 fused_ordering(739) 00:14:47.695 fused_ordering(740) 00:14:47.695 fused_ordering(741) 00:14:47.695 fused_ordering(742) 00:14:47.696 fused_ordering(743) 
00:14:47.696 fused_ordering(744) 00:14:47.696 fused_ordering(745) 00:14:47.696 fused_ordering(746) 00:14:47.696 fused_ordering(747) 00:14:47.696 fused_ordering(748) 00:14:47.696 fused_ordering(749) 00:14:47.696 fused_ordering(750) 00:14:47.696 fused_ordering(751) 00:14:47.696 fused_ordering(752) 00:14:47.696 fused_ordering(753) 00:14:47.696 fused_ordering(754) 00:14:47.696 fused_ordering(755) 00:14:47.696 fused_ordering(756) 00:14:47.696 fused_ordering(757) 00:14:47.696 fused_ordering(758) 00:14:47.696 fused_ordering(759) 00:14:47.696 fused_ordering(760) 00:14:47.696 fused_ordering(761) 00:14:47.696 fused_ordering(762) 00:14:47.696 fused_ordering(763) 00:14:47.696 fused_ordering(764) 00:14:47.696 fused_ordering(765) 00:14:47.696 fused_ordering(766) 00:14:47.696 fused_ordering(767) 00:14:47.696 fused_ordering(768) 00:14:47.696 fused_ordering(769) 00:14:47.696 fused_ordering(770) 00:14:47.696 fused_ordering(771) 00:14:47.696 fused_ordering(772) 00:14:47.696 fused_ordering(773) 00:14:47.696 fused_ordering(774) 00:14:47.696 fused_ordering(775) 00:14:47.696 fused_ordering(776) 00:14:47.696 fused_ordering(777) 00:14:47.696 fused_ordering(778) 00:14:47.696 fused_ordering(779) 00:14:47.696 fused_ordering(780) 00:14:47.696 fused_ordering(781) 00:14:47.696 fused_ordering(782) 00:14:47.696 fused_ordering(783) 00:14:47.696 fused_ordering(784) 00:14:47.696 fused_ordering(785) 00:14:47.696 fused_ordering(786) 00:14:47.696 fused_ordering(787) 00:14:47.696 fused_ordering(788) 00:14:47.696 fused_ordering(789) 00:14:47.696 fused_ordering(790) 00:14:47.696 fused_ordering(791) 00:14:47.696 fused_ordering(792) 00:14:47.696 fused_ordering(793) 00:14:47.696 fused_ordering(794) 00:14:47.696 fused_ordering(795) 00:14:47.696 fused_ordering(796) 00:14:47.696 fused_ordering(797) 00:14:47.696 fused_ordering(798) 00:14:47.696 fused_ordering(799) 00:14:47.696 fused_ordering(800) 00:14:47.696 fused_ordering(801) 00:14:47.696 fused_ordering(802) 00:14:47.696 fused_ordering(803) 00:14:47.696 fused_ordering(804) 00:14:47.696 fused_ordering(805) 00:14:47.696 fused_ordering(806) 00:14:47.696 fused_ordering(807) 00:14:47.696 fused_ordering(808) 00:14:47.696 fused_ordering(809) 00:14:47.696 fused_ordering(810) 00:14:47.696 fused_ordering(811) 00:14:47.696 fused_ordering(812) 00:14:47.696 fused_ordering(813) 00:14:47.696 fused_ordering(814) 00:14:47.696 fused_ordering(815) 00:14:47.696 fused_ordering(816) 00:14:47.696 fused_ordering(817) 00:14:47.696 fused_ordering(818) 00:14:47.696 fused_ordering(819) 00:14:47.696 fused_ordering(820) 00:14:48.267 fused_ordering(821) 00:14:48.267 fused_ordering(822) 00:14:48.267 fused_ordering(823) 00:14:48.267 fused_ordering(824) 00:14:48.267 fused_ordering(825) 00:14:48.267 fused_ordering(826) 00:14:48.267 fused_ordering(827) 00:14:48.267 fused_ordering(828) 00:14:48.267 fused_ordering(829) 00:14:48.267 fused_ordering(830) 00:14:48.267 fused_ordering(831) 00:14:48.267 fused_ordering(832) 00:14:48.267 fused_ordering(833) 00:14:48.267 fused_ordering(834) 00:14:48.267 fused_ordering(835) 00:14:48.267 fused_ordering(836) 00:14:48.267 fused_ordering(837) 00:14:48.267 fused_ordering(838) 00:14:48.267 fused_ordering(839) 00:14:48.267 fused_ordering(840) 00:14:48.267 fused_ordering(841) 00:14:48.267 fused_ordering(842) 00:14:48.267 fused_ordering(843) 00:14:48.267 fused_ordering(844) 00:14:48.267 fused_ordering(845) 00:14:48.267 fused_ordering(846) 00:14:48.267 fused_ordering(847) 00:14:48.267 fused_ordering(848) 00:14:48.267 fused_ordering(849) 00:14:48.267 fused_ordering(850) 00:14:48.267 
fused_ordering(851) 00:14:48.267 fused_ordering(852) 00:14:48.267 fused_ordering(853) 00:14:48.267 fused_ordering(854) 00:14:48.267 fused_ordering(855) 00:14:48.267 fused_ordering(856) 00:14:48.267 fused_ordering(857) 00:14:48.267 fused_ordering(858) 00:14:48.267 fused_ordering(859) 00:14:48.267 fused_ordering(860) 00:14:48.267 fused_ordering(861) 00:14:48.267 fused_ordering(862) 00:14:48.267 fused_ordering(863) 00:14:48.267 fused_ordering(864) 00:14:48.267 fused_ordering(865) 00:14:48.267 fused_ordering(866) 00:14:48.267 fused_ordering(867) 00:14:48.267 fused_ordering(868) 00:14:48.267 fused_ordering(869) 00:14:48.267 fused_ordering(870) 00:14:48.267 fused_ordering(871) 00:14:48.267 fused_ordering(872) 00:14:48.267 fused_ordering(873) 00:14:48.267 fused_ordering(874) 00:14:48.267 fused_ordering(875) 00:14:48.267 fused_ordering(876) 00:14:48.267 fused_ordering(877) 00:14:48.267 fused_ordering(878) 00:14:48.267 fused_ordering(879) 00:14:48.267 fused_ordering(880) 00:14:48.267 fused_ordering(881) 00:14:48.267 fused_ordering(882) 00:14:48.267 fused_ordering(883) 00:14:48.267 fused_ordering(884) 00:14:48.267 fused_ordering(885) 00:14:48.267 fused_ordering(886) 00:14:48.267 fused_ordering(887) 00:14:48.267 fused_ordering(888) 00:14:48.267 fused_ordering(889) 00:14:48.267 fused_ordering(890) 00:14:48.267 fused_ordering(891) 00:14:48.267 fused_ordering(892) 00:14:48.267 fused_ordering(893) 00:14:48.267 fused_ordering(894) 00:14:48.267 fused_ordering(895) 00:14:48.267 fused_ordering(896) 00:14:48.267 fused_ordering(897) 00:14:48.267 fused_ordering(898) 00:14:48.267 fused_ordering(899) 00:14:48.267 fused_ordering(900) 00:14:48.267 fused_ordering(901) 00:14:48.267 fused_ordering(902) 00:14:48.267 fused_ordering(903) 00:14:48.267 fused_ordering(904) 00:14:48.267 fused_ordering(905) 00:14:48.267 fused_ordering(906) 00:14:48.267 fused_ordering(907) 00:14:48.267 fused_ordering(908) 00:14:48.267 fused_ordering(909) 00:14:48.267 fused_ordering(910) 00:14:48.267 fused_ordering(911) 00:14:48.267 fused_ordering(912) 00:14:48.267 fused_ordering(913) 00:14:48.267 fused_ordering(914) 00:14:48.267 fused_ordering(915) 00:14:48.267 fused_ordering(916) 00:14:48.267 fused_ordering(917) 00:14:48.267 fused_ordering(918) 00:14:48.267 fused_ordering(919) 00:14:48.267 fused_ordering(920) 00:14:48.267 fused_ordering(921) 00:14:48.267 fused_ordering(922) 00:14:48.267 fused_ordering(923) 00:14:48.267 fused_ordering(924) 00:14:48.267 fused_ordering(925) 00:14:48.267 fused_ordering(926) 00:14:48.267 fused_ordering(927) 00:14:48.267 fused_ordering(928) 00:14:48.267 fused_ordering(929) 00:14:48.267 fused_ordering(930) 00:14:48.267 fused_ordering(931) 00:14:48.267 fused_ordering(932) 00:14:48.267 fused_ordering(933) 00:14:48.267 fused_ordering(934) 00:14:48.267 fused_ordering(935) 00:14:48.267 fused_ordering(936) 00:14:48.267 fused_ordering(937) 00:14:48.267 fused_ordering(938) 00:14:48.267 fused_ordering(939) 00:14:48.267 fused_ordering(940) 00:14:48.267 fused_ordering(941) 00:14:48.267 fused_ordering(942) 00:14:48.267 fused_ordering(943) 00:14:48.267 fused_ordering(944) 00:14:48.267 fused_ordering(945) 00:14:48.267 fused_ordering(946) 00:14:48.267 fused_ordering(947) 00:14:48.267 fused_ordering(948) 00:14:48.267 fused_ordering(949) 00:14:48.267 fused_ordering(950) 00:14:48.267 fused_ordering(951) 00:14:48.267 fused_ordering(952) 00:14:48.267 fused_ordering(953) 00:14:48.267 fused_ordering(954) 00:14:48.267 fused_ordering(955) 00:14:48.267 fused_ordering(956) 00:14:48.267 fused_ordering(957) 00:14:48.267 fused_ordering(958) 
00:14:48.267 fused_ordering(959) 00:14:48.267 fused_ordering(960) 00:14:48.267 fused_ordering(961) 00:14:48.267 fused_ordering(962) 00:14:48.267 fused_ordering(963) 00:14:48.267 fused_ordering(964) 00:14:48.267 fused_ordering(965) 00:14:48.267 fused_ordering(966) 00:14:48.267 fused_ordering(967) 00:14:48.267 fused_ordering(968) 00:14:48.267 fused_ordering(969) 00:14:48.267 fused_ordering(970) 00:14:48.267 fused_ordering(971) 00:14:48.267 fused_ordering(972) 00:14:48.267 fused_ordering(973) 00:14:48.267 fused_ordering(974) 00:14:48.267 fused_ordering(975) 00:14:48.267 fused_ordering(976) 00:14:48.267 fused_ordering(977) 00:14:48.267 fused_ordering(978) 00:14:48.267 fused_ordering(979) 00:14:48.267 fused_ordering(980) 00:14:48.267 fused_ordering(981) 00:14:48.267 fused_ordering(982) 00:14:48.267 fused_ordering(983) 00:14:48.267 fused_ordering(984) 00:14:48.267 fused_ordering(985) 00:14:48.267 fused_ordering(986) 00:14:48.267 fused_ordering(987) 00:14:48.267 fused_ordering(988) 00:14:48.267 fused_ordering(989) 00:14:48.267 fused_ordering(990) 00:14:48.267 fused_ordering(991) 00:14:48.267 fused_ordering(992) 00:14:48.267 fused_ordering(993) 00:14:48.267 fused_ordering(994) 00:14:48.267 fused_ordering(995) 00:14:48.267 fused_ordering(996) 00:14:48.267 fused_ordering(997) 00:14:48.267 fused_ordering(998) 00:14:48.267 fused_ordering(999) 00:14:48.267 fused_ordering(1000) 00:14:48.267 fused_ordering(1001) 00:14:48.267 fused_ordering(1002) 00:14:48.267 fused_ordering(1003) 00:14:48.267 fused_ordering(1004) 00:14:48.267 fused_ordering(1005) 00:14:48.267 fused_ordering(1006) 00:14:48.267 fused_ordering(1007) 00:14:48.267 fused_ordering(1008) 00:14:48.267 fused_ordering(1009) 00:14:48.267 fused_ordering(1010) 00:14:48.267 fused_ordering(1011) 00:14:48.267 fused_ordering(1012) 00:14:48.267 fused_ordering(1013) 00:14:48.267 fused_ordering(1014) 00:14:48.267 fused_ordering(1015) 00:14:48.267 fused_ordering(1016) 00:14:48.267 fused_ordering(1017) 00:14:48.268 fused_ordering(1018) 00:14:48.268 fused_ordering(1019) 00:14:48.268 fused_ordering(1020) 00:14:48.268 fused_ordering(1021) 00:14:48.268 fused_ordering(1022) 00:14:48.268 fused_ordering(1023) 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:48.268 rmmod nvme_tcp 00:14:48.268 rmmod nvme_fabrics 00:14:48.268 rmmod nvme_keyring 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:14:48.268 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:14:48.268 19:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 283669 ']' 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 283669 ']' 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 283669' 00:14:48.528 killing process with pid 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 283669 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@264 -- # local dev 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:48.528 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:48.529 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:48.529 19:04:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # return 0 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 
00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@284 -- # iptr 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-save 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-restore 00:14:51.075 00:14:51.075 real 0m13.152s 00:14:51.075 user 0m6.874s 00:14:51.075 sys 0m6.918s 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.075 ************************************ 00:14:51.075 END TEST nvmf_fused_ordering 00:14:51.075 ************************************ 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.075 19:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.076 ************************************ 00:14:51.076 START TEST nvmf_ns_masking 00:14:51.076 ************************************ 00:14:51.076 19:04:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:51.076 * Looking for test storage... 
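Interleaved above is the tail of nvmf_fini before the ns_masking run proceeds: the trace removes the target network namespace, flushes both test interfaces, resets dev_map, and restores iptables minus any SPDK rules. Condensed (a sketch; the ip netns delete form of _remove_target_ns is an assumption, while the flush commands and the iptables pipeline are verbatim from the trace):

ip netns delete nvmf_ns_spdk 2>/dev/null || true       # assumed body of _remove_target_ns (its output is discarded in the trace)
ip addr flush dev cvl_0_0                              # flush_ip on each device in dev_map
ip addr flush dev cvl_0_1
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop SPDK_NVMF rules, keep everything else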
00:14:51.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.076 --rc genhtml_branch_coverage=1 00:14:51.076 --rc genhtml_function_coverage=1 00:14:51.076 --rc genhtml_legend=1 00:14:51.076 --rc geninfo_all_blocks=1 00:14:51.076 --rc geninfo_unexecuted_blocks=1 00:14:51.076 00:14:51.076 ' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.076 --rc genhtml_branch_coverage=1 00:14:51.076 --rc genhtml_function_coverage=1 00:14:51.076 --rc genhtml_legend=1 00:14:51.076 --rc geninfo_all_blocks=1 00:14:51.076 --rc geninfo_unexecuted_blocks=1 00:14:51.076 00:14:51.076 ' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.076 --rc genhtml_branch_coverage=1 00:14:51.076 --rc genhtml_function_coverage=1 00:14:51.076 --rc genhtml_legend=1 00:14:51.076 --rc geninfo_all_blocks=1 00:14:51.076 --rc geninfo_unexecuted_blocks=1 00:14:51.076 00:14:51.076 ' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:51.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.076 --rc genhtml_branch_coverage=1 00:14:51.076 --rc genhtml_function_coverage=1 00:14:51.076 --rc genhtml_legend=1 00:14:51.076 --rc geninfo_all_blocks=1 00:14:51.076 --rc geninfo_unexecuted_blocks=1 00:14:51.076 00:14:51.076 ' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:51.076 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@50 -- # : 0 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:51.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:51.077 19:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c813445f-19ec-4488-bd7f-cc25ec992d7b 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4270730f-c992-49e0-8bfb-dbf9410b317c 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=eadeb6e0-e599-4f19-98bf-06f71a099456 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:14:51.077 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@131 -- # pci_devs=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:59.216 19:04:27 
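The array setup above is how gather_supported_nvmf_pci_devs buckets NICs: e810 and x722 collect Intel device IDs (0x1592/0x159b and 0x37d2), mlx collects the Mellanox ConnectX/BlueField IDs, and each entry is resolved through a pci_bus_cache lookup that common.sh populates elsewhere. A rough equivalent using stock lspci, shown only to illustrate the matching (pci_bus_cache itself is internal to the harness):

    # Print PCI addresses of supported Intel NVMe-oF NICs, e.g. 0000:4b:00.0
    for id in 8086:1592 8086:159b 8086:37d2; do
        lspci -D -d "$id" | awk '{print $1}'
    done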
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:59.216 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:59.216 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:59.216 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:59.216 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- 
# for pci in "${pci_devs[@]}" 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:59.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # create_target_ns 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:59.217 19:04:27 
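Both E810 ports were found (cvl_0_0 and cvl_0_1, both up), so the harness enters nvmf_tcp_init: it isolates the target side in a network namespace so one machine can act as both initiator and target over real hardware. Stripped of the xtrace wrappers, the sequence that follows amounts to:

    ip netns add nvmf_ns_spdk                      # namespace that will own the target port
    ip netns exec nvmf_ns_spdk ip link set lo up   # loopback inside the namespace
    ip link set cvl_0_1 netns nvmf_ns_spdk         # move the second port into the namespace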
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr 
add 10.0.0.1/24 dev cvl_0_0' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:59.217 10.0.0.1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:59.217 10.0.0.2 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:59.217 19:04:27 
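Addressing is symmetric: 10.0.0.1/24 stays on the host-side port, 10.0.0.2/24 goes to the port inside the namespace, and each address is also written to the interface's ifalias so later helpers can read it back from sysfs instead of parsing ip(8) output. Condensed from the trace:

    ip addr add 10.0.0.1/24 dev cvl_0_0
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up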
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:59.217 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:59.218 
19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:59.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.634 ms 00:14:59.218 00:14:59.218 --- 10.0.0.1 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
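ping_ips is the connectivity gate before any NVMe traffic: the initiator address is read back from ifalias and pinged from inside the target namespace, then the target address is pinged from the host, and both single-packet pings must succeed (0.634 ms above, 0.277 ms below). The same check without the helper indirection:

    ip netns exec nvmf_ns_spdk ping -c 1 "$(cat /sys/class/net/cvl_0_0/ifalias)"   # namespace -> host port
    ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias)"   # host -> namespace port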
nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:59.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:14:59.218 00:14:59.218 --- 10.0.0.2 ping statistics --- 00:14:59.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.218 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # 
get_initiator_ip_address 1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev= 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:59.218 19:04:27 
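nvmf_legacy_env then maps the discovered topology onto the variable names the older test suites consume; with a single phy pair, the second-device slots resolve to empty strings rather than erroring. The effective values for this run, collected from the trace:

    NVMF_TARGET_INTERFACE=cvl_0_1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=    # no initiator1 in dev_map
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=       # no target1 in dev_map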
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1 00:14:59.218 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev= 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=288392 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 288392 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 288392 ']' 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.219 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.219 [2024-11-05 19:04:27.579531] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:14:59.219 [2024-11-05 19:04:27.579586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.219 [2024-11-05 19:04:27.656494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.219 [2024-11-05 19:04:27.690668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.219 [2024-11-05 19:04:27.690702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.219 [2024-11-05 19:04:27.690710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.219 [2024-11-05 19:04:27.690717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.219 [2024-11-05 19:04:27.690723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.219 [2024-11-05 19:04:27.691267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.219 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:59.479 [2024-11-05 19:04:28.563826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.479 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:59.479 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:59.479 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.479 Malloc1 00:14:59.479 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.739 Malloc2 00:14:59.739 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:59.999 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:00.259 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.259 [2024-11-05 19:04:29.481297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.259 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:00.259 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eadeb6e0-e599-4f19-98bf-06f71a099456 -a 10.0.0.2 -s 4420 -i 4 00:15:00.529 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.529 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:00.529 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.529 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:00.529 19:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:02.438 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.698 [ 0]:0x1 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.698 19:04:31 
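ns_is_visible is the assertion this whole test leans on: it greps nvme list-ns output for the namespace ID, then fetches the NGUID with nvme id-ns and treats an all-zero NGUID as "not visible". A simplified sketch consistent with the trace (the real helper lives in target/ns_masking.sh):

    ns_is_visible() {   # $1 = nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }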
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daad133ae04a404284d811e6d7471058 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daad133ae04a404284d811e6d7471058 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.698 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.958 [ 0]:0x1 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daad133ae04a404284d811e6d7471058 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daad133ae04a404284d811e6d7471058 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.958 [ 1]:0x2 00:15:02.958 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.959 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.959 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:02.959 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.959 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:02.959 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.218 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.218 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:03.509 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:03.509 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eadeb6e0-e599-4f19-98bf-06f71a099456 -a 10.0.0.2 -s 4420 -i 4 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:15:03.821 19:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.781 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.781 19:04:35 
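This is the first masking assertion proper: namespace 1 was removed and re-added with --no-auto-visible, so after reconnecting, host1 must not see it; the NOT wrapper inverts the helper's exit status so a hidden namespace counts as a pass. The RPC pair being exercised, as issued above (repo-relative paths shortened):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible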
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.781 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.042 [ 0]:0x2 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.042 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.304 [ 0]:0x1 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daad133ae04a404284d811e6d7471058 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daad133ae04a404284d811e6d7471058 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.304 [ 1]:0x2 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.304 19:04:35 
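The all-zero NGUID confirms namespace 1 is hidden while namespace 2 (auto-visible) still shows up, and then nvmf_ns_add_host grants host1 access at runtime: no reconnect is needed, and the very next ns_is_visible 0x1 sees the namespace again. The grant and its inverse, as used in this test (paths shortened):

    scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1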
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.304 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:06.564 [ 0]:0x2 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.564 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:06.826 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:06.826 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eadeb6e0-e599-4f19-98bf-06f71a099456 -a 10.0.0.2 -s 4420 -i 4 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:15:07.088 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:09.003 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.263 19:04:38 
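waitforserial SPDKISFASTANDAWESOME 2 polls until lsblk shows the expected number of block devices carrying the target's serial, with the count passed in by the connect helper. A simplified sketch of the loop traced at common/autotest_common.sh@1200-1210 (retry limit and sleep interval taken from the trace):

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        sleep 2                                   # let nvme connect surface devices
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }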
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.263 [ 0]:0x1 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daad133ae04a404284d811e6d7471058 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daad133ae04a404284d811e6d7471058 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.263 [ 1]:0x2 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.263 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.525 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.525 [ 0]:0x2 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:09.786 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:09.786 [2024-11-05 19:04:39.052782] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:09.786 request: 00:15:09.786 { 00:15:09.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.786 "nsid": 2, 00:15:09.786 "host": "nqn.2016-06.io.spdk:host1", 00:15:09.786 "method": "nvmf_ns_remove_host", 00:15:09.786 "req_id": 1 00:15:09.786 } 00:15:09.786 Got JSON-RPC error response 00:15:09.786 response: 00:15:09.786 { 00:15:09.786 "code": -32602, 00:15:09.786 "message": "Invalid parameters" 00:15:09.786 } 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.786 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # 
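The nvmf_ns_remove_host failure above is the expected branch (hence the NOT wrapper): namespace 2 was left auto-visible, so per-host masking calls against it are rejected with -32602. On the wire this is plain JSON-RPC 2.0, with the params echoed back in the logged request; roughly as below, where the default target socket path and the nc transport are both illustrative assumptions (rpc.py does this framing itself):

    echo '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_ns_remove_host",
          "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
                     "host": "nqn.2016-06.io.spdk:host1"}}' |
        nc -U /var/tmp/spdk.sock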
ns_is_visible 0x2 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:10.047 [ 0]:0x2 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4392288a6c124af5a39cba5c6fd52b04 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4392288a6c124af5a39cba5c6fd52b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=290896 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 290896 /var/tmp/host.sock 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 290896 ']' 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:10.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:10.047 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:10.047 [2024-11-05 19:04:39.301670] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
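From ns_masking.sh@117 the test starts a second SPDK app to play the host role, with its own RPC socket so host-side bdev_nvme calls can run against the same target independently. In plain form (a sketch; waitforlisten and killprocess are the harness helpers traced elsewhere in this log):

    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
    waitforlisten "$hostpid" /var/tmp/host.sock   # returns once the socket accepts RPCs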
00:15:10.047 [2024-11-05 19:04:39.301721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290896 ] 00:15:10.308 [2024-11-05 19:04:39.391717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.308 [2024-11-05 19:04:39.426878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.880 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:10.880 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:15:10.880 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.140 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.141 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c813445f-19ec-4488-bd7f-cc25ec992d7b 00:15:11.141 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:15:11.141 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C813445F19EC4488BD7FCC25EC992D7B -i 00:15:11.401 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4270730f-c992-49e0-8bfb-dbf9410b317c 00:15:11.401 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:15:11.401 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4270730FC99249E08BFBDBF9410B317C -i 00:15:11.662 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.662 19:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:11.923 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:11.923 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:12.184 nvme0n1 00:15:12.184 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:12.184 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
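uuid2nguid converts a canonical UUID to the 32-hex-digit NGUID that nvmf_subsystem_add_ns -g expects. The trace shows the tr -d - half (nvmf/common.sh@538); the upper-casing step is assumed from the resulting value:

    uuid2nguid() {
        local uuid=${1^^}     # upper-case (assumed; tr is what the trace shows)
        tr -d - <<< "$uuid"   # strip the dashes
    }
    # uuid2nguid c813445f-19ec-4488-bd7f-cc25ec992d7b
    #   -> C813445F19EC4488BD7FCC25EC992D7B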
target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:12.445 nvme1n2 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:12.445 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:12.446 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:12.446 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:12.446 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:12.706 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c813445f-19ec-4488-bd7f-cc25ec992d7b == \c\8\1\3\4\4\5\f\-\1\9\e\c\-\4\4\8\8\-\b\d\7\f\-\c\c\2\5\e\c\9\9\2\d\7\b ]] 00:15:12.706 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:12.706 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:12.706 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:12.967 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4270730f-c992-49e0-8bfb-dbf9410b317c == \4\2\7\0\7\3\0\f\-\c\9\9\2\-\4\9\e\0\-\8\b\f\b\-\d\b\f\9\4\1\0\b\3\1\7\c ]] 00:15:12.967 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.967 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c813445f-19ec-4488-bd7f-cc25ec992d7b 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C813445F19EC4488BD7FCC25EC992D7B 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
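hostrpc is a thin wrapper pointing the same rpc.py at the host app's socket rather than the target's, exactly as target/ns_masking.sh@48 expands each time:

    hostrpc() {
        scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # e.g.: hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    #   -> "nvme0n1 nvme1n2", the value the [[ ... ]] comparison above checks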
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C813445F19EC4488BD7FCC25EC992D7B 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.228 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.229 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.229 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.229 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C813445F19EC4488BD7FCC25EC992D7B 00:15:13.490 [2024-11-05 19:04:42.626662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:13.490 [2024-11-05 19:04:42.626699] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:13.490 [2024-11-05 19:04:42.626709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:13.490 request: 00:15:13.490 { 00:15:13.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.490 "namespace": { 00:15:13.490 "bdev_name": "invalid", 00:15:13.490 "nsid": 1, 00:15:13.490 "nguid": "C813445F19EC4488BD7FCC25EC992D7B", 00:15:13.490 "no_auto_visible": false 00:15:13.490 }, 00:15:13.490 "method": "nvmf_subsystem_add_ns", 00:15:13.490 "req_id": 1 00:15:13.490 } 00:15:13.490 Got JSON-RPC error response 00:15:13.490 response: 00:15:13.490 { 00:15:13.490 "code": -32602, 00:15:13.490 "message": "Invalid parameters" 00:15:13.490 } 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c813445f-19ec-4488-bd7f-cc25ec992d7b 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:15:13.490 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C813445F19EC4488BD7FCC25EC992D7B -i 00:15:13.750 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:15.663 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:15.663 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:15.663 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:15.924 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:15.924 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 290896 00:15:15.924 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 290896 ']' 00:15:15.924 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 290896 00:15:15.924 19:04:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 290896 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 290896' 00:15:15.924 killing process with pid 290896 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 290896 00:15:15.924 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 290896 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:16.184 rmmod nvme_tcp 00:15:16.184 rmmod nvme_fabrics 00:15:16.184 rmmod nvme_keyring 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:16.184 19:04:45 
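killprocess, per the common/autotest_common.sh@952-976 expansions above: confirm the pid is alive, refuse to signal a bare sudo wrapper, then kill and reap. A simplified reconstruction (the real helper carries extra branches for non-Linux hosts and sudo-owned processes):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0       # already exited
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1   # simplification: leave sudo alone
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }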
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 288392 ']' 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 288392 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 288392 ']' 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 288392 00:15:16.184 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 288392 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 288392' 00:15:16.445 killing process with pid 288392 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 288392 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 288392 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@264 -- # local dev 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:16.445 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:18.990 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:18.990 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:18.990 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # return 0 00:15:18.990 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:18.991 19:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@284 -- # iptr 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-save 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-restore 00:15:18.991 00:15:18.991 real 0m27.825s 00:15:18.991 user 0m31.365s 00:15:18.991 sys 0m8.061s 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:18.991 ************************************ 00:15:18.991 END TEST nvmf_ns_masking 00:15:18.991 ************************************ 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:18.991 ************************************ 00:15:18.991 START TEST nvmf_nvme_cli 00:15:18.991 ************************************ 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:18.991 * Looking for test storage... 
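The teardown just traced reduces to two small helpers: flush the test interfaces' addresses (inside the target netns when one is named) and restore iptables minus SPDK's own rules. Sketches matching the nvmf/setup.sh@221-224 and nvmf/common.sh@542 expansions:

    flush_ip() {
        local dev=$1 in_ns=$2
        [[ -n $in_ns ]] && in_ns="ip netns exec $in_ns"
        eval "$in_ns ip addr flush dev $dev"
    }
    iptr() {
        # drop only the SPDK_NVMF-tagged rules, keep the rest of the ruleset
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }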
00:15:18.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:15:18.991 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:18.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.991 --rc genhtml_branch_coverage=1 00:15:18.991 --rc genhtml_function_coverage=1 00:15:18.991 --rc genhtml_legend=1 00:15:18.991 --rc geninfo_all_blocks=1 00:15:18.991 --rc geninfo_unexecuted_blocks=1 00:15:18.991 00:15:18.991 ' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:18.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.991 --rc genhtml_branch_coverage=1 00:15:18.991 --rc genhtml_function_coverage=1 00:15:18.991 --rc genhtml_legend=1 00:15:18.991 --rc geninfo_all_blocks=1 00:15:18.991 --rc geninfo_unexecuted_blocks=1 00:15:18.991 00:15:18.991 ' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:18.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.991 --rc genhtml_branch_coverage=1 00:15:18.991 --rc genhtml_function_coverage=1 00:15:18.991 --rc genhtml_legend=1 00:15:18.991 --rc geninfo_all_blocks=1 00:15:18.991 --rc geninfo_unexecuted_blocks=1 00:15:18.991 00:15:18.991 ' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:18.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.991 --rc genhtml_branch_coverage=1 00:15:18.991 --rc genhtml_function_coverage=1 00:15:18.991 --rc genhtml_legend=1 00:15:18.991 --rc geninfo_all_blocks=1 00:15:18.991 --rc geninfo_unexecuted_blocks=1 00:15:18.991 00:15:18.991 ' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
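The lt 1.15 2 call decides which spelling of the coverage flags the installed lcov (1.15 here) understands, picking the lcov_branch_coverage form for anything below 2. scripts/common.sh compares versions field by field; reduced to the less-than case it comes down to:

    lt() {
        local IFS=.-:     # split on dots, dashes and colons, as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            # missing fields count as 0, so 1.15 vs 2 compares 1 < 2 first
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1          # equal versions are not less-than
    }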
00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.991 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:18.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
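The "[: : integer expression expected" complaint above is the shell itself: nvmf/common.sh line 31 feeds an unset variable to -eq, and [ '' -eq 1 ] is not a valid integer test. It is harmless here, but the defensive spelling gives the flag a default first (the variable name below is a stand-in, not necessarily the one line 31 actually tests):

    [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ] && echo "flag enabled"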
nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:15:18.992 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # e810=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # 
mlx=() 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:25.585 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:25.586 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:25.586 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:25.586 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:25.586 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:15:25.586 19:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # create_target_ns 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # 
_ns=NVMF_TARGET_NS_CMD 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:25.586 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:15:25.848 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:25.848 10.0.0.1 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:25.848 19:04:55 
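The address pool is carried around as a 32-bit integer: 167772161 is 0x0A000001, i.e. 10*2^24 + 1, which prints as 10.0.0.1, and the pre-increment in ips=("$ip" $((++ip))) yields the paired target address 167772162 = 10.0.0.2. A minimal sketch of the val_to_ip conversion as plain bash arithmetic (the helper in setup.sh may differ in detail):

val_to_ip() {
  local val=$1
  # peel four octets off a 32-bit integer, most significant byte first
  printf '%u.%u.%u.%u\n' $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2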
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:25.848 10.0.0.2 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:25.848 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:15:25.849 
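Everything up to this point builds a two-endpoint topology on a single host: the first E810 port (cvl_0_0) stays in the root namespace as the initiator at 10.0.0.1, the second (cvl_0_1) is moved into the nvmf_ns_spdk namespace as the target at 10.0.0.2, and an iptables rule accepts NVMe/TCP traffic (port 4420) arriving on the initiator interface. Condensed to the bare commands, exactly as they appear in the trace (the harness additionally tags its iptables rule with an SPDK_NVMF comment so cleanup can find it later):

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_0                    # initiator stays in the root namespace
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT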
19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:25.849 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:26.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.664 ms 00:15:26.111 00:15:26.111 --- 10.0.0.1 ping statistics --- 00:15:26.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.111 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:26.111 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:26.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:26.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:15:26.112 00:15:26.112 --- 10.0.0.2 ping statistics --- 00:15:26.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.112 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:15:26.112 19:04:55 
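Both directions are now verified: the namespaced target can reach the initiator at 10.0.0.1, and the root namespace can reach the target at 10.0.0.2. Note that every address lookup in the trace reads /sys/class/net/<dev>/ifalias rather than parsing ip(8) output; the tee during set_ip stored the address there precisely so later helpers can recover it with a plain cat. The round-trip, using this run's initiator interface:

echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias   # written during set_ip (needs root)
cat /sys/class/net/cvl_0_0/ifalias                   # read back by get_ip_address -> 10.0.0.1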
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=296300 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 296300 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 296300 ']' 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
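The target application is started inside the namespace so its TCP listener binds on the namespaced side, while its JSON-RPC unix socket (/var/tmp/spdk.sock) stays visible to the host for provisioning; waitforlisten blocks until that socket answers. A condensed sketch of the launch-and-wait pattern, assuming the SPDK tree used in this run (the real waitforlisten also caps retries and checks that the pid is still alive):

ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket before sending any configuration
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  sleep 0.1
done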
00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:26.112 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.112 [2024-11-05 19:04:55.375341] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:26.112 [2024-11-05 19:04:55.375411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.374 [2024-11-05 19:04:55.459376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.374 [2024-11-05 19:04:55.503552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.374 [2024-11-05 19:04:55.503589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.374 [2024-11-05 19:04:55.503597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.374 [2024-11-05 19:04:55.503605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.374 [2024-11-05 19:04:55.503610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.374 [2024-11-05 19:04:55.505393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.374 [2024-11-05 19:04:55.505507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.374 [2024-11-05 19:04:55.505662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.374 [2024-11-05 19:04:55.505662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.946 [2024-11-05 19:04:56.228836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.946 Malloc0 00:15:26.946 19:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.946 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 Malloc1 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 [2024-11-05 19:04:56.326513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:27.208 
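The whole target configuration is a short JSON-RPC sequence: create the TCP transport, back two 64 MB / 512 B malloc bdevs, expose them as namespaces of nqn.2016-06.io.spdk:cnode1, and add data plus discovery listeners on 10.0.0.2:4420; the discovery output that follows shows the resulting two log entries. rpc_cmd is a thin wrapper over the RPC socket, so the same state could plausibly be built directly with scripts/rpc.py using the parameters from this run:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420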
00:15:27.208 Discovery Log Number of Records 2, Generation counter 2 00:15:27.208 =====Discovery Log Entry 0====== 00:15:27.208 trtype: tcp 00:15:27.208 adrfam: ipv4 00:15:27.208 subtype: current discovery subsystem 00:15:27.208 treq: not required 00:15:27.208 portid: 0 00:15:27.208 trsvcid: 4420 00:15:27.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:27.208 traddr: 10.0.0.2 00:15:27.208 eflags: explicit discovery connections, duplicate discovery information 00:15:27.208 sectype: none 00:15:27.208 =====Discovery Log Entry 1====== 00:15:27.208 trtype: tcp 00:15:27.208 adrfam: ipv4 00:15:27.208 subtype: nvme subsystem 00:15:27.208 treq: not required 00:15:27.208 portid: 0 00:15:27.208 trsvcid: 4420 00:15:27.208 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:27.208 traddr: 10.0.0.2 00:15:27.208 eflags: none 00:15:27.208 sectype: none 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:15:27.208 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:27.468 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:27.468 19:04:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:15:28.851 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:31.395 /dev/nvme0n2 ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:15:31.395 19:05:00 
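Rather than sleeping a fixed interval after nvme connect, the harness polls lsblk and counts block devices whose SERIAL column carries the subsystem serial; the run proceeds once the count reaches the two expected namespaces (/dev/nvme0n1 and /dev/nvme0n2). A minimal sketch of that wait loop, using this run's serial:

expected=2
for _ in {1..16}; do
  found=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
  (( found >= expected )) && break
  sleep 2
done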
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:31.395 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:31.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:31.396 rmmod nvme_tcp 00:15:31.396 rmmod nvme_fabrics 00:15:31.396 rmmod nvme_keyring 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 296300 ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 296300 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 296300 ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 296300 00:15:31.396 19:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 296300 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 296300' 00:15:31.396 killing process with pid 296300 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 296300 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 296300 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@264 -- # local dev 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@267 -- # remove_target_ns 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:31.396 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@268 -- # delete_main_bridge 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # return 0 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:15:33.945 19:05:02 
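Teardown mirrors setup in reverse so nothing is left holding the modules or the namespace: disconnect the initiator, delete the subsystem over RPC, unload nvme-tcp and nvme-fabrics, kill the target process, then strip the namespace, the addresses, and the tagged iptables rule. Condensed from the trace, with one labeled assumption:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete nvmf_ns_spdk                           # assumption: what _remove_target_ns boils down to
ip addr flush dev cvl_0_0
ip addr flush dev cvl_0_1                              # the port is back in the root namespace by now
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF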
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@284 -- # iptr 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-save 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-restore 00:15:33.945 00:15:33.945 real 0m14.801s 00:15:33.945 user 0m22.378s 00:15:33.945 sys 0m6.182s 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 ************************************ 00:15:33.945 END TEST nvmf_nvme_cli 00:15:33.945 ************************************ 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.945 ************************************ 00:15:33.945 START TEST nvmf_vfio_user 00:15:33.945 ************************************ 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:33.945 * Looking for test storage... 
00:15:33.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.945 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:33.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.946 --rc genhtml_branch_coverage=1 00:15:33.946 --rc genhtml_function_coverage=1 00:15:33.946 --rc genhtml_legend=1 00:15:33.946 --rc geninfo_all_blocks=1 00:15:33.946 --rc geninfo_unexecuted_blocks=1 00:15:33.946 00:15:33.946 ' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:33.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.946 --rc genhtml_branch_coverage=1 00:15:33.946 --rc genhtml_function_coverage=1 00:15:33.946 --rc genhtml_legend=1 00:15:33.946 --rc geninfo_all_blocks=1 00:15:33.946 --rc geninfo_unexecuted_blocks=1 00:15:33.946 00:15:33.946 ' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:33.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.946 --rc genhtml_branch_coverage=1 00:15:33.946 --rc genhtml_function_coverage=1 00:15:33.946 --rc genhtml_legend=1 00:15:33.946 --rc geninfo_all_blocks=1 00:15:33.946 --rc geninfo_unexecuted_blocks=1 00:15:33.946 00:15:33.946 ' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:33.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.946 --rc genhtml_branch_coverage=1 00:15:33.946 --rc genhtml_function_coverage=1 00:15:33.946 --rc genhtml_legend=1 00:15:33.946 --rc geninfo_all_blocks=1 00:15:33.946 --rc geninfo_unexecuted_blocks=1 00:15:33.946 00:15:33.946 ' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
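The lcov guard traced above compares versions by splitting on '.', '-', and ':' and walking the components left to right, so 1.15 sorts below 2 and the legacy --rc lcov_branch_coverage / lcov_function_coverage options get exported. A minimal sketch of that comparison logic (the cmp_versions helper in scripts/common.sh is more general):

lt() {
  local IFS=.-: i
  local -a v1 v2
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # versions are equal
}
lt 1.15 2 && echo "legacy lcov options"   # prints: legacy lcov options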
nvmf/common.sh@7 -- # uname -s 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:33.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:33.946 19:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=298103 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 298103' 00:15:33.946 Process pid: 298103 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 298103 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 298103 ']' 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:33.946 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:33.946 [2024-11-05 19:05:03.024513] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
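A note on the "[: : integer expression expected" message captured above: common.sh line 31 evaluates '[' '' -eq 1 ']', handing test an empty string where an integer is required, so the comparison fails noisily but harmlessly and the run continues. A minimal defensive sketch, using SOME_FLAG as a hypothetical stand-in for whatever variable line 31 actually tests:

    # SOME_FLAG is a hypothetical placeholder, not the real name in common.sh.
    # Guard the numeric test so an unset/empty value falls through quietly.
    if [ -n "${SOME_FLAG:-}" ] && [ "${SOME_FLAG}" -eq 1 ]; then
        echo "flag enabled"
    fi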
00:15:33.946 [2024-11-05 19:05:03.024569] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.946 [2024-11-05 19:05:03.091458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.946 [2024-11-05 19:05:03.128913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.946 [2024-11-05 19:05:03.128946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.946 [2024-11-05 19:05:03.128955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.946 [2024-11-05 19:05:03.128961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.947 [2024-11-05 19:05:03.128967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.947 [2024-11-05 19:05:03.130508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.947 [2024-11-05 19:05:03.130627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.947 [2024-11-05 19:05:03.130790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.947 [2024-11-05 19:05:03.130790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.947 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.947 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:33.947 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:35.332 Malloc1 00:15:35.332 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:35.592 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:35.852 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:35.852 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.852 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:35.852 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:36.112 Malloc2 00:15:36.112 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:36.372 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:36.372 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:36.634 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:36.634 [2024-11-05 19:05:05.911314] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
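The target setup traced above reduces to one transport creation plus a per-device loop of rpc.py calls; a condensed sketch, assuming a repo-root working directory and omitting the waitforlisten/pid handling:

    # Condensed replay of the vfio-user setup performed above (sketch only).
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # target must be up first
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done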
00:15:36.634 [2024-11-05 19:05:05.911357] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid298524 ] 00:15:36.897 [2024-11-05 19:05:05.965865] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:36.897 [2024-11-05 19:05:05.974021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.897 [2024-11-05 19:05:05.974040] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdd598de000 00:15:36.897 [2024-11-05 19:05:05.975020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.976018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.977025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.978031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.979035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.983753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.984064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.985075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.897 [2024-11-05 19:05:05.986085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.897 [2024-11-05 19:05:05.986098] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdd598d3000 00:15:36.897 [2024-11-05 19:05:05.987425] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.897 [2024-11-05 19:05:06.005511] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:36.897 [2024-11-05 19:05:06.005544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:36.897 [2024-11-05 19:05:06.008198] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:36.897 [2024-11-05 19:05:06.008249] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:36.897 [2024-11-05 19:05:06.008335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:36.897 
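In the controller bring-up that follows, the first registers read are VS (offset 0x8) and CAP (offset 0x0); the VS value 0x10300 seen just below decodes to NVMe 1.3 (major version in bits 31:16, minor in bits 15:8), matching the "NVMe Specification Version (VS): 1.3" line in the identify report further down. A quick sketch of the decode:

    # NVMe VS register layout: MJR = bits 31:16, MNR = bits 15:8
    vs=0x10300
    printf 'NVMe %d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff ))   # NVMe 1.3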
[2024-11-05 19:05:06.008353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:36.897 [2024-11-05 19:05:06.008359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:36.897 [2024-11-05 19:05:06.009205] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:36.897 [2024-11-05 19:05:06.009215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:36.897 [2024-11-05 19:05:06.009222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:36.897 [2024-11-05 19:05:06.010209] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:36.897 [2024-11-05 19:05:06.010219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:36.897 [2024-11-05 19:05:06.010227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.011209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:36.897 [2024-11-05 19:05:06.011217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.012223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:36.897 [2024-11-05 19:05:06.012233] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:36.897 [2024-11-05 19:05:06.012238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.012245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.012353] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:36.897 [2024-11-05 19:05:06.012358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.012364] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:36.897 [2024-11-05 19:05:06.013230] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:36.897 [2024-11-05 19:05:06.014229] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:36.897 [2024-11-05 19:05:06.015236] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:36.897 [2024-11-05 19:05:06.016231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.897 [2024-11-05 19:05:06.016281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:36.897 [2024-11-05 19:05:06.017247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:36.897 [2024-11-05 19:05:06.017257] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:36.897 [2024-11-05 19:05:06.017263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:36.897 [2024-11-05 19:05:06.017284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:36.897 [2024-11-05 19:05:06.017299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:36.897 [2024-11-05 19:05:06.017314] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.897 [2024-11-05 19:05:06.017320] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.897 [2024-11-05 19:05:06.017324] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.897 [2024-11-05 19:05:06.017338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.897 [2024-11-05 19:05:06.017371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:36.897 [2024-11-05 19:05:06.017381] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:36.897 [2024-11-05 19:05:06.017386] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:36.897 [2024-11-05 19:05:06.017390] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:36.897 [2024-11-05 19:05:06.017395] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:36.897 [2024-11-05 19:05:06.017400] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:36.897 [2024-11-05 19:05:06.017407] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:36.897 [2024-11-05 19:05:06.017412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:36.897 [2024-11-05 19:05:06.017421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:36.897 [2024-11-05 
19:05:06.017431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:36.897 [2024-11-05 19:05:06.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:36.897 [2024-11-05 19:05:06.017452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.897 [2024-11-05 19:05:06.017461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.897 [2024-11-05 19:05:06.017469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.897 [2024-11-05 19:05:06.017477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.897 [2024-11-05 19:05:06.017482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:36.897 [2024-11-05 19:05:06.017489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017520] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:36.898 [2024-11-05 19:05:06.017526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017636] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:36.898 [2024-11-05 19:05:06.017640] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:15:36.898 [2024-11-05 19:05:06.017643] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.017650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017671] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:36.898 [2024-11-05 19:05:06.017679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017695] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.898 [2024-11-05 19:05:06.017699] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.898 [2024-11-05 19:05:06.017703] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.017709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017768] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.898 [2024-11-05 19:05:06.017774] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.898 [2024-11-05 19:05:06.017778] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.017784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set 
supported features (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017839] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:36.898 [2024-11-05 19:05:06.017844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:36.898 [2024-11-05 19:05:06.017849] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:36.898 [2024-11-05 19:05:06.017867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.017944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.017957] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:36.898 [2024-11-05 19:05:06.017962] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:36.898 [2024-11-05 19:05:06.017966] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:36.898 [2024-11-05 19:05:06.017969] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:36.898 [2024-11-05 19:05:06.017973] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:36.898 [2024-11-05 19:05:06.017980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:36.898 [2024-11-05 19:05:06.017988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:36.898 
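The GET LOG PAGE commands in this stretch decode straight out of cdw10: the low byte is the log page ID and the upper 16 bits are NUMDL, the zero-based dword count. So cdw10:07ff0001 above asks for log page 0x01 (Error Information) in (0x7ff + 1) * 4 = 8192 bytes, exactly the len:8192 buffer built from PRP1 plus one PRP2 entry, and the cdw10:007f0002 / 007f0003 / 03ff0005 requests that follow fetch the SMART/Health, firmware slot, and command-effects pages the same way. A sketch of the arithmetic:

    # cdw10 = (NUMDL << 16) | LID; transfer size = (NUMDL + 1) * 4 bytes
    cdw10=0x07ff0001
    printf 'LID 0x%02x, %d bytes\n' $(( cdw10 & 0xff )) $(( (((cdw10 >> 16) & 0xffff) + 1) * 4 ))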
[2024-11-05 19:05:06.017992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:36.898 [2024-11-05 19:05:06.017996] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.018002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.018009] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:36.898 [2024-11-05 19:05:06.018014] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.898 [2024-11-05 19:05:06.018017] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.018023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.018033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:36.898 [2024-11-05 19:05:06.018037] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:36.898 [2024-11-05 19:05:06.018040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.898 [2024-11-05 19:05:06.018046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:36.898 [2024-11-05 19:05:06.018054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.018065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.018076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:36.898 [2024-11-05 19:05:06.018083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:36.898 ===================================================== 00:15:36.898 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.898 ===================================================== 00:15:36.898 Controller Capabilities/Features 00:15:36.898 ================================ 00:15:36.898 Vendor ID: 4e58 00:15:36.898 Subsystem Vendor ID: 4e58 00:15:36.898 Serial Number: SPDK1 00:15:36.898 Model Number: SPDK bdev Controller 00:15:36.898 Firmware Version: 25.01 00:15:36.898 Recommended Arb Burst: 6 00:15:36.899 IEEE OUI Identifier: 8d 6b 50 00:15:36.899 Multi-path I/O 00:15:36.899 May have multiple subsystem ports: Yes 00:15:36.899 May have multiple controllers: Yes 00:15:36.899 Associated with SR-IOV VF: No 00:15:36.899 Max Data Transfer Size: 131072 00:15:36.899 Max Number of Namespaces: 32 00:15:36.899 Max Number of I/O Queues: 127 00:15:36.899 NVMe Specification Version (VS): 1.3 00:15:36.899 NVMe Specification Version (Identify): 1.3 00:15:36.899 Maximum Queue Entries: 256 00:15:36.899 Contiguous Queues Required: Yes 00:15:36.899 Arbitration Mechanisms Supported 00:15:36.899 Weighted Round Robin: Not Supported 00:15:36.899 Vendor Specific: Not 
Supported 00:15:36.899 Reset Timeout: 15000 ms 00:15:36.899 Doorbell Stride: 4 bytes 00:15:36.899 NVM Subsystem Reset: Not Supported 00:15:36.899 Command Sets Supported 00:15:36.899 NVM Command Set: Supported 00:15:36.899 Boot Partition: Not Supported 00:15:36.899 Memory Page Size Minimum: 4096 bytes 00:15:36.899 Memory Page Size Maximum: 4096 bytes 00:15:36.899 Persistent Memory Region: Not Supported 00:15:36.899 Optional Asynchronous Events Supported 00:15:36.899 Namespace Attribute Notices: Supported 00:15:36.899 Firmware Activation Notices: Not Supported 00:15:36.899 ANA Change Notices: Not Supported 00:15:36.899 PLE Aggregate Log Change Notices: Not Supported 00:15:36.899 LBA Status Info Alert Notices: Not Supported 00:15:36.899 EGE Aggregate Log Change Notices: Not Supported 00:15:36.899 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.899 Zone Descriptor Change Notices: Not Supported 00:15:36.899 Discovery Log Change Notices: Not Supported 00:15:36.899 Controller Attributes 00:15:36.899 128-bit Host Identifier: Supported 00:15:36.899 Non-Operational Permissive Mode: Not Supported 00:15:36.899 NVM Sets: Not Supported 00:15:36.899 Read Recovery Levels: Not Supported 00:15:36.899 Endurance Groups: Not Supported 00:15:36.899 Predictable Latency Mode: Not Supported 00:15:36.899 Traffic Based Keep ALive: Not Supported 00:15:36.899 Namespace Granularity: Not Supported 00:15:36.899 SQ Associations: Not Supported 00:15:36.899 UUID List: Not Supported 00:15:36.899 Multi-Domain Subsystem: Not Supported 00:15:36.899 Fixed Capacity Management: Not Supported 00:15:36.899 Variable Capacity Management: Not Supported 00:15:36.899 Delete Endurance Group: Not Supported 00:15:36.899 Delete NVM Set: Not Supported 00:15:36.899 Extended LBA Formats Supported: Not Supported 00:15:36.899 Flexible Data Placement Supported: Not Supported 00:15:36.899 00:15:36.899 Controller Memory Buffer Support 00:15:36.899 ================================ 00:15:36.899 Supported: No 00:15:36.899 00:15:36.899 Persistent Memory Region Support 00:15:36.899 ================================ 00:15:36.899 Supported: No 00:15:36.899 00:15:36.899 Admin Command Set Attributes 00:15:36.899 ============================ 00:15:36.899 Security Send/Receive: Not Supported 00:15:36.899 Format NVM: Not Supported 00:15:36.899 Firmware Activate/Download: Not Supported 00:15:36.899 Namespace Management: Not Supported 00:15:36.899 Device Self-Test: Not Supported 00:15:36.899 Directives: Not Supported 00:15:36.899 NVMe-MI: Not Supported 00:15:36.899 Virtualization Management: Not Supported 00:15:36.899 Doorbell Buffer Config: Not Supported 00:15:36.899 Get LBA Status Capability: Not Supported 00:15:36.899 Command & Feature Lockdown Capability: Not Supported 00:15:36.899 Abort Command Limit: 4 00:15:36.899 Async Event Request Limit: 4 00:15:36.899 Number of Firmware Slots: N/A 00:15:36.899 Firmware Slot 1 Read-Only: N/A 00:15:36.899 Firmware Activation Without Reset: N/A 00:15:36.899 Multiple Update Detection Support: N/A 00:15:36.899 Firmware Update Granularity: No Information Provided 00:15:36.899 Per-Namespace SMART Log: No 00:15:36.899 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.899 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:36.899 Command Effects Log Page: Supported 00:15:36.899 Get Log Page Extended Data: Supported 00:15:36.899 Telemetry Log Pages: Not Supported 00:15:36.899 Persistent Event Log Pages: Not Supported 00:15:36.899 Supported Log Pages Log Page: May Support 00:15:36.899 Commands Supported & 
Effects Log Page: Not Supported 00:15:36.899 Feature Identifiers & Effects Log Page:May Support 00:15:36.899 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.899 Data Area 4 for Telemetry Log: Not Supported 00:15:36.899 Error Log Page Entries Supported: 128 00:15:36.899 Keep Alive: Supported 00:15:36.899 Keep Alive Granularity: 10000 ms 00:15:36.899 00:15:36.899 NVM Command Set Attributes 00:15:36.899 ========================== 00:15:36.899 Submission Queue Entry Size 00:15:36.899 Max: 64 00:15:36.899 Min: 64 00:15:36.899 Completion Queue Entry Size 00:15:36.899 Max: 16 00:15:36.899 Min: 16 00:15:36.899 Number of Namespaces: 32 00:15:36.899 Compare Command: Supported 00:15:36.899 Write Uncorrectable Command: Not Supported 00:15:36.899 Dataset Management Command: Supported 00:15:36.899 Write Zeroes Command: Supported 00:15:36.899 Set Features Save Field: Not Supported 00:15:36.899 Reservations: Not Supported 00:15:36.899 Timestamp: Not Supported 00:15:36.899 Copy: Supported 00:15:36.899 Volatile Write Cache: Present 00:15:36.899 Atomic Write Unit (Normal): 1 00:15:36.899 Atomic Write Unit (PFail): 1 00:15:36.899 Atomic Compare & Write Unit: 1 00:15:36.899 Fused Compare & Write: Supported 00:15:36.899 Scatter-Gather List 00:15:36.899 SGL Command Set: Supported (Dword aligned) 00:15:36.899 SGL Keyed: Not Supported 00:15:36.899 SGL Bit Bucket Descriptor: Not Supported 00:15:36.899 SGL Metadata Pointer: Not Supported 00:15:36.899 Oversized SGL: Not Supported 00:15:36.899 SGL Metadata Address: Not Supported 00:15:36.899 SGL Offset: Not Supported 00:15:36.899 Transport SGL Data Block: Not Supported 00:15:36.899 Replay Protected Memory Block: Not Supported 00:15:36.899 00:15:36.899 Firmware Slot Information 00:15:36.899 ========================= 00:15:36.899 Active slot: 1 00:15:36.899 Slot 1 Firmware Revision: 25.01 00:15:36.899 00:15:36.899 00:15:36.899 Commands Supported and Effects 00:15:36.899 ============================== 00:15:36.899 Admin Commands 00:15:36.899 -------------- 00:15:36.899 Get Log Page (02h): Supported 00:15:36.899 Identify (06h): Supported 00:15:36.899 Abort (08h): Supported 00:15:36.899 Set Features (09h): Supported 00:15:36.899 Get Features (0Ah): Supported 00:15:36.899 Asynchronous Event Request (0Ch): Supported 00:15:36.899 Keep Alive (18h): Supported 00:15:36.899 I/O Commands 00:15:36.899 ------------ 00:15:36.899 Flush (00h): Supported LBA-Change 00:15:36.899 Write (01h): Supported LBA-Change 00:15:36.899 Read (02h): Supported 00:15:36.899 Compare (05h): Supported 00:15:36.899 Write Zeroes (08h): Supported LBA-Change 00:15:36.899 Dataset Management (09h): Supported LBA-Change 00:15:36.899 Copy (19h): Supported LBA-Change 00:15:36.899 00:15:36.899 Error Log 00:15:36.899 ========= 00:15:36.899 00:15:36.899 Arbitration 00:15:36.899 =========== 00:15:36.899 Arbitration Burst: 1 00:15:36.899 00:15:36.899 Power Management 00:15:36.899 ================ 00:15:36.899 Number of Power States: 1 00:15:36.899 Current Power State: Power State #0 00:15:36.899 Power State #0: 00:15:36.899 Max Power: 0.00 W 00:15:36.899 Non-Operational State: Operational 00:15:36.899 Entry Latency: Not Reported 00:15:36.899 Exit Latency: Not Reported 00:15:36.899 Relative Read Throughput: 0 00:15:36.899 Relative Read Latency: 0 00:15:36.899 Relative Write Throughput: 0 00:15:36.899 Relative Write Latency: 0 00:15:36.899 Idle Power: Not Reported 00:15:36.899 Active Power: Not Reported 00:15:36.899 Non-Operational Permissive Mode: Not Supported 00:15:36.899 00:15:36.899 Health Information 
00:15:36.899 ================== 00:15:36.899 Critical Warnings: 00:15:36.899 Available Spare Space: OK 00:15:36.899 Temperature: OK 00:15:36.899 Device Reliability: OK 00:15:36.899 Read Only: No 00:15:36.899 Volatile Memory Backup: OK 00:15:36.899 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:36.899 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:36.899 Available Spare: 0% 00:15:36.899 Available Spare Threshold: 0% [2024-11-05 19:05:06.018188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:36.899 [2024-11-05 19:05:06.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:36.899 [2024-11-05 19:05:06.018229] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:36.899 [2024-11-05 19:05:06.018239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.899 [2024-11-05 19:05:06.018246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.899 [2024-11-05 19:05:06.018253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.900 [2024-11-05 19:05:06.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.900 [2024-11-05 19:05:06.020754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:36.900 [2024-11-05 19:05:06.020766] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:36.900 [2024-11-05 19:05:06.021264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.900 [2024-11-05 19:05:06.021307] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:36.900 [2024-11-05 19:05:06.021316] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:36.900 [2024-11-05 19:05:06.022270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:36.900 [2024-11-05 19:05:06.022283] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:36.900 [2024-11-05 19:05:06.022344] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:36.900 [2024-11-05 19:05:06.025754] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.900 Life Percentage Used: 0% 00:15:36.900 Data Units Read: 0 00:15:36.900 Data Units Written: 0 00:15:36.900 Host Read Commands: 0 00:15:36.900 Host Write Commands: 0 00:15:36.900 Controller Busy Time: 0 minutes 00:15:36.900 Power Cycles: 0 00:15:36.900 Power On Hours: 0 hours 00:15:36.900 Unsafe Shutdowns: 0 00:15:36.900 Unrecoverable Media Errors: 0 00:15:36.900 Lifetime Error Log Entries: 0 00:15:36.900 Warning Temperature 
Time: 0 minutes 00:15:36.900 Critical Temperature Time: 0 minutes 00:15:36.900 00:15:36.900 Number of Queues 00:15:36.900 ================ 00:15:36.900 Number of I/O Submission Queues: 127 00:15:36.900 Number of I/O Completion Queues: 127 00:15:36.900 00:15:36.900 Active Namespaces 00:15:36.900 ================= 00:15:36.900 Namespace ID:1 00:15:36.900 Error Recovery Timeout: Unlimited 00:15:36.900 Command Set Identifier: NVM (00h) 00:15:36.900 Deallocate: Supported 00:15:36.900 Deallocated/Unwritten Error: Not Supported 00:15:36.900 Deallocated Read Value: Unknown 00:15:36.900 Deallocate in Write Zeroes: Not Supported 00:15:36.900 Deallocated Guard Field: 0xFFFF 00:15:36.900 Flush: Supported 00:15:36.900 Reservation: Supported 00:15:36.900 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.900 Size (in LBAs): 131072 (0GiB) 00:15:36.900 Capacity (in LBAs): 131072 (0GiB) 00:15:36.900 Utilization (in LBAs): 131072 (0GiB) 00:15:36.900 NGUID: 7CD2D539F3DB4AE8BFF85FD7A4E890B0 00:15:36.900 UUID: 7cd2d539-f3db-4ae8-bff8-5fd7a4e890b0 00:15:36.900 Thin Provisioning: Not Supported 00:15:36.900 Per-NS Atomic Units: Yes 00:15:36.900 Atomic Boundary Size (Normal): 0 00:15:36.900 Atomic Boundary Size (PFail): 0 00:15:36.900 Atomic Boundary Offset: 0 00:15:36.900 Maximum Single Source Range Length: 65535 00:15:36.900 Maximum Copy Length: 65535 00:15:36.900 Maximum Source Range Count: 1 00:15:36.900 NGUID/EUI64 Never Reused: No 00:15:36.900 Namespace Write Protected: No 00:15:36.900 Number of LBA Formats: 1 00:15:36.900 Current LBA Format: LBA Format #00 00:15:36.900 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:36.900 00:15:36.900 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:36.900 [2024-11-05 19:05:06.220458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.189 Initializing NVMe Controllers 00:15:42.189 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.189 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:42.189 Initialization complete. Launching workers. 
00:15:42.189 ======================================================== 00:15:42.189 Latency(us) 00:15:42.189 Device Information : IOPS MiB/s Average min max 00:15:42.189 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40102.60 156.65 3194.46 850.81 9768.88 00:15:42.189 ======================================================== 00:15:42.189 Total : 40102.60 156.65 3194.46 850.81 9768.88 00:15:42.189 00:15:42.189 [2024-11-05 19:05:11.241015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.189 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:42.189 [2024-11-05 19:05:11.433915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.474 Initializing NVMe Controllers 00:15:47.474 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.474 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:47.474 Initialization complete. Launching workers. 00:15:47.474 ======================================================== 00:15:47.474 Latency(us) 00:15:47.474 Device Information : IOPS MiB/s Average min max 00:15:47.474 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.78 4987.65 9977.34 00:15:47.474 ======================================================== 00:15:47.474 Total : 16076.80 62.80 7972.78 4987.65 9977.34 00:15:47.474 00:15:47.474 [2024-11-05 19:05:16.472558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.474 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:47.474 [2024-11-05 19:05:16.682456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.758 [2024-11-05 19:05:21.768033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.758 Initializing NVMe Controllers 00:15:52.758 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.758 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:52.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:52.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:52.758 Initialization complete. Launching workers. 
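In the two perf tables above, the MiB/s column is just IOPS times the 4096-byte I/O size; a sketch check with bc:

    # MiB/s = IOPS * io_size / 2^20, for the read and write runs above
    echo '40102.60 * 4096 / 1048576' | bc -l   # ~156.65 MiB/s (read)
    echo '16076.80 * 4096 / 1048576' | bc -l   # ~62.80  MiB/s (write)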
00:15:52.758 Starting thread on core 2 00:15:52.758 Starting thread on core 3 00:15:52.758 Starting thread on core 1 00:15:52.758 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:52.758 [2024-11-05 19:05:22.046390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.056 [2024-11-05 19:05:25.100652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.056 Initializing NVMe Controllers 00:15:56.056 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.056 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:56.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:56.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:56.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:56.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:56.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:56.056 Initialization complete. Launching workers. 00:15:56.056 Starting thread on core 1 with urgent priority queue 00:15:56.056 Starting thread on core 2 with urgent priority queue 00:15:56.056 Starting thread on core 3 with urgent priority queue 00:15:56.056 Starting thread on core 0 with urgent priority queue 00:15:56.056 SPDK bdev Controller (SPDK1 ) core 0: 11229.00 IO/s 8.91 secs/100000 ios 00:15:56.056 SPDK bdev Controller (SPDK1 ) core 1: 11128.67 IO/s 8.99 secs/100000 ios 00:15:56.056 SPDK bdev Controller (SPDK1 ) core 2: 8875.00 IO/s 11.27 secs/100000 ios 00:15:56.056 SPDK bdev Controller (SPDK1 ) core 3: 11575.33 IO/s 8.64 secs/100000 ios 00:15:56.056 ======================================================== 00:15:56.056 00:15:56.056 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:56.316 [2024-11-05 19:05:25.386636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.316 Initializing NVMe Controllers 00:15:56.316 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.316 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.316 Namespace ID: 1 size: 0GB 00:15:56.316 Initialization complete. 00:15:56.316 INFO: using host memory buffer for IO 00:15:56.316 Hello world! 
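In the arbitration summary above, the "secs/100000 ios" column is simply the reciprocal of the per-core IO/s figure scaled to 100000 I/Os; for core 0, using the number from the table:

$ awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000/11229.00 }'
8.91 secs/100000 ios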
00:15:56.316 [2024-11-05 19:05:25.421826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.316 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:56.576 [2024-11-05 19:05:25.704129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.516 Initializing NVMe Controllers 00:15:57.516 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.516 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.516 Initialization complete. Launching workers. 00:15:57.516 submit (in ns) avg, min, max = 9580.5, 3908.3, 4037170.0 00:15:57.516 complete (in ns) avg, min, max = 16933.8, 2375.0, 4024575.8 00:15:57.516 00:15:57.516 Submit histogram 00:15:57.516 ================ 00:15:57.516 Range in us Cumulative Count 00:15:57.516 3.893 - 3.920: 0.4199% ( 79) 00:15:57.516 3.920 - 3.947: 4.6399% ( 794) 00:15:57.516 3.947 - 3.973: 15.1741% ( 1982) 00:15:57.516 3.973 - 4.000: 25.5116% ( 1945) 00:15:57.516 4.000 - 4.027: 35.9288% ( 1960) 00:15:57.516 4.027 - 4.053: 48.0733% ( 2285) 00:15:57.516 4.053 - 4.080: 64.0978% ( 3015) 00:15:57.516 4.080 - 4.107: 80.3667% ( 3061) 00:15:57.516 4.107 - 4.133: 91.6503% ( 2123) 00:15:57.516 4.133 - 4.160: 96.5772% ( 927) 00:15:57.516 4.160 - 4.187: 98.5225% ( 366) 00:15:57.516 4.187 - 4.213: 99.2240% ( 132) 00:15:57.516 4.213 - 4.240: 99.4419% ( 41) 00:15:57.516 4.240 - 4.267: 99.4845% ( 8) 00:15:57.516 4.267 - 4.293: 99.4951% ( 2) 00:15:57.516 4.400 - 4.427: 99.5004% ( 1) 00:15:57.516 4.427 - 4.453: 99.5057% ( 1) 00:15:57.516 4.480 - 4.507: 99.5110% ( 1) 00:15:57.516 4.533 - 4.560: 99.5163% ( 1) 00:15:57.516 4.640 - 4.667: 99.5217% ( 1) 00:15:57.516 4.720 - 4.747: 99.5323% ( 2) 00:15:57.516 4.880 - 4.907: 99.5376% ( 1) 00:15:57.516 4.907 - 4.933: 99.5429% ( 1) 00:15:57.516 4.960 - 4.987: 99.5482% ( 1) 00:15:57.516 5.360 - 5.387: 99.5535% ( 1) 00:15:57.516 5.440 - 5.467: 99.5589% ( 1) 00:15:57.516 5.840 - 5.867: 99.5642% ( 1) 00:15:57.516 5.867 - 5.893: 99.5695% ( 1) 00:15:57.516 5.973 - 6.000: 99.5748% ( 1) 00:15:57.516 6.000 - 6.027: 99.5801% ( 1) 00:15:57.516 6.027 - 6.053: 99.5961% ( 3) 00:15:57.516 6.053 - 6.080: 99.6067% ( 2) 00:15:57.516 6.080 - 6.107: 99.6120% ( 1) 00:15:57.516 6.107 - 6.133: 99.6280% ( 3) 00:15:57.516 6.133 - 6.160: 99.6333% ( 1) 00:15:57.516 6.160 - 6.187: 99.6439% ( 2) 00:15:57.516 6.187 - 6.213: 99.6545% ( 2) 00:15:57.516 6.373 - 6.400: 99.6598% ( 1) 00:15:57.516 6.400 - 6.427: 99.6705% ( 2) 00:15:57.516 6.427 - 6.453: 99.6758% ( 1) 00:15:57.516 6.480 - 6.507: 99.6811% ( 1) 00:15:57.516 6.507 - 6.533: 99.6864% ( 1) 00:15:57.516 6.533 - 6.560: 99.6917% ( 1) 00:15:57.516 6.560 - 6.587: 99.7130% ( 4) 00:15:57.516 6.587 - 6.613: 99.7236% ( 2) 00:15:57.516 6.613 - 6.640: 99.7289% ( 1) 00:15:57.516 6.667 - 6.693: 99.7343% ( 1) 00:15:57.516 6.720 - 6.747: 99.7396% ( 1) 00:15:57.516 6.747 - 6.773: 99.7449% ( 1) 00:15:57.516 6.800 - 6.827: 99.7502% ( 1) 00:15:57.516 6.880 - 6.933: 99.7555% ( 1) 00:15:57.516 6.933 - 6.987: 99.7608% ( 1) 00:15:57.516 6.987 - 7.040: 99.7715% ( 2) 00:15:57.516 7.040 - 7.093: 99.7821% ( 2) 00:15:57.516 7.147 - 7.200: 99.7927% ( 2) 00:15:57.516 7.253 - 7.307: 99.7980% ( 1) 00:15:57.516 7.467 - 7.520: 99.8033% ( 1) 00:15:57.516 7.520 - 7.573: 99.8193% ( 3) 
00:15:57.516 7.573 - 7.627: 99.8246% ( 1) 00:15:57.516 7.733 - 7.787: 99.8299% ( 1) 00:15:57.516 8.107 - 8.160: 99.8352% ( 1) 00:15:57.516 8.267 - 8.320: 99.8406% ( 1) 00:15:57.516 10.987 - 11.040: 99.8459% ( 1) 00:15:57.516 11.840 - 11.893: 99.8512% ( 1) 00:15:57.516 11.947 - 12.000: 99.8618% ( 2) 00:15:57.516 3986.773 - 4014.080: 99.9947% ( 25) 00:15:57.516 4014.080 - 4041.387: 100.0000% ( 1) 00:15:57.516 00:15:57.516 Complete histogram 00:15:57.516 ================== 00:15:57.516 Range in us Cumulative Count 00:15:57.516 2.373 - 2.387: 0.0053% ( 1) 00:15:57.516 2.387 - 2.400: 0.2551% ( 47) 00:15:57.516 2.400 - 2.413: 0.5528% ( 56) 00:15:57.516 2.413 - 2.427: 0.6325% ( 15) 00:15:57.516 2.427 - 2.440: 0.7281% ( 18) 00:15:57.516 2.440 - 2.453: 15.4876% ( 2777) 00:15:57.516 2.453 - 2.467: 57.6774% ( 7938) 00:15:57.516 2.467 - 2.480: 65.9686% ( 1560) 00:15:57.516 2.480 - 2.493: 77.5977% ( 2188) 00:15:57.516 2.493 - 2.507: 80.5421% ( 554) 00:15:57.516 2.507 - 2.520: 82.0197% ( 278) [2024-11-05 19:05:26.726527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:57.516 2.520 - 2.533: 87.4940% ( 1030) 00:15:57.516 2.533 - 2.547: 93.1757% ( 1069) 00:15:57.516 2.547 - 2.560: 96.5666% ( 638) 00:15:57.516 2.560 - 2.573: 98.3790% ( 341) 00:15:57.517 2.573 - 2.587: 99.0646% ( 129) 00:15:57.517 2.587 - 2.600: 99.3091% ( 46) 00:15:57.517 2.600 - 2.613: 99.3782% ( 13) 00:15:57.517 2.613 - 2.627: 99.4100% ( 6) 00:15:57.517 2.720 - 2.733: 99.4154% ( 1) 00:15:57.517 3.027 - 3.040: 99.4207% ( 1) 00:15:57.517 3.160 - 3.173: 99.4260% ( 1) 00:15:57.517 3.173 - 3.187: 99.4313% ( 1) 00:15:57.517 3.187 - 3.200: 99.4366% ( 1) 00:15:57.517 3.400 - 3.413: 99.4419% ( 1) 00:15:57.517 4.320 - 4.347: 99.4472% ( 1) 00:15:57.517 4.480 - 4.507: 99.4632% ( 3) 00:15:57.517 4.507 - 4.533: 99.4685% ( 1) 00:15:57.517 4.560 - 4.587: 99.4738% ( 1) 00:15:57.517 4.587 - 4.613: 99.4791% ( 1) 00:15:57.517 4.613 - 4.640: 99.4845% ( 1) 00:15:57.517 4.640 - 4.667: 99.4951% ( 2) 00:15:57.517 4.667 - 4.693: 99.5004% ( 1) 00:15:57.517 4.693 - 4.720: 99.5163% ( 3) 00:15:57.517 4.720 - 4.747: 99.5270% ( 2) 00:15:57.517 4.827 - 4.853: 99.5323% ( 1) 00:15:57.517 4.853 - 4.880: 99.5376% ( 1) 00:15:57.517 5.093 - 5.120: 99.5429% ( 1) 00:15:57.517 5.120 - 5.147: 99.5482% ( 1) 00:15:57.517 5.147 - 5.173: 99.5535% ( 1) 00:15:57.517 5.333 - 5.360: 99.5642% ( 2) 00:15:57.517 5.360 - 5.387: 99.5695% ( 1) 00:15:57.517 5.573 - 5.600: 99.5748% ( 1) 00:15:57.517 5.787 - 5.813: 99.5801% ( 1) 00:15:57.517 5.867 - 5.893: 99.5854% ( 1) 00:15:57.517 5.973 - 6.000: 99.5908% ( 1) 00:15:57.517 6.080 - 6.107: 99.5961% ( 1) 00:15:57.517 6.107 - 6.133: 99.6014% ( 1) 00:15:57.517 6.427 - 6.453: 99.6067% ( 1) 00:15:57.517 7.200 - 7.253: 99.6120% ( 1) 00:15:57.517 11.893 - 11.947: 99.6173% ( 1) 00:15:57.517 12.053 - 12.107: 99.6226% ( 1) 00:15:57.517 34.773 - 34.987: 99.6280% ( 1) 00:15:57.517 44.373 - 44.587: 99.6333% ( 1) 00:15:57.517 169.813 - 170.667: 99.6386% ( 1) 00:15:57.517 3986.773 - 4014.080: 99.9947% ( 67) 00:15:57.517 4014.080 - 4041.387: 100.0000% ( 1) 00:15:57.517 00:15:57.517 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:57.517 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:57.517 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23
-- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:57.517 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:57.517 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.777 [ 00:15:57.777 { 00:15:57.777 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.777 "subtype": "Discovery", 00:15:57.777 "listen_addresses": [], 00:15:57.777 "allow_any_host": true, 00:15:57.777 "hosts": [] 00:15:57.777 }, 00:15:57.777 { 00:15:57.777 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.777 "subtype": "NVMe", 00:15:57.777 "listen_addresses": [ 00:15:57.777 { 00:15:57.777 "trtype": "VFIOUSER", 00:15:57.777 "adrfam": "IPv4", 00:15:57.777 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.777 "trsvcid": "0" 00:15:57.777 } 00:15:57.777 ], 00:15:57.777 "allow_any_host": true, 00:15:57.777 "hosts": [], 00:15:57.777 "serial_number": "SPDK1", 00:15:57.777 "model_number": "SPDK bdev Controller", 00:15:57.777 "max_namespaces": 32, 00:15:57.777 "min_cntlid": 1, 00:15:57.777 "max_cntlid": 65519, 00:15:57.777 "namespaces": [ 00:15:57.777 { 00:15:57.777 "nsid": 1, 00:15:57.777 "bdev_name": "Malloc1", 00:15:57.777 "name": "Malloc1", 00:15:57.777 "nguid": "7CD2D539F3DB4AE8BFF85FD7A4E890B0", 00:15:57.777 "uuid": "7cd2d539-f3db-4ae8-bff8-5fd7a4e890b0" 00:15:57.777 } 00:15:57.777 ] 00:15:57.777 }, 00:15:57.777 { 00:15:57.777 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.777 "subtype": "NVMe", 00:15:57.777 "listen_addresses": [ 00:15:57.777 { 00:15:57.777 "trtype": "VFIOUSER", 00:15:57.777 "adrfam": "IPv4", 00:15:57.777 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.777 "trsvcid": "0" 00:15:57.777 } 00:15:57.777 ], 00:15:57.777 "allow_any_host": true, 00:15:57.777 "hosts": [], 00:15:57.777 "serial_number": "SPDK2", 00:15:57.777 "model_number": "SPDK bdev Controller", 00:15:57.777 "max_namespaces": 32, 00:15:57.777 "min_cntlid": 1, 00:15:57.777 "max_cntlid": 65519, 00:15:57.777 "namespaces": [ 00:15:57.777 { 00:15:57.777 "nsid": 1, 00:15:57.777 "bdev_name": "Malloc2", 00:15:57.777 "name": "Malloc2", 00:15:57.777 "nguid": "2C22EA65E52340448BDB363B23F9744D", 00:15:57.777 "uuid": "2c22ea65-e523-4044-8bdb-363b23f9744d" 00:15:57.777 } 00:15:57.777 ] 00:15:57.777 } 00:15:57.777 ] 00:15:57.777 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.777 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=302732 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.778 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:58.038 Malloc3 00:15:58.038 [2024-11-05 19:05:27.157426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.038 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:58.038 [2024-11-05 19:05:27.319511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.038 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:58.298 Asynchronous Event Request test 00:15:58.298 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.298 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.298 Registering asynchronous event callbacks... 00:15:58.298 Starting namespace attribute notice tests for all controllers... 00:15:58.298 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:58.298 aer_cb - Changed Namespace 00:15:58.298 Cleaning up... 00:15:58.298 [ 00:15:58.298 { 00:15:58.298 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:58.298 "subtype": "Discovery", 00:15:58.298 "listen_addresses": [], 00:15:58.298 "allow_any_host": true, 00:15:58.298 "hosts": [] 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:58.298 "subtype": "NVMe", 00:15:58.298 "listen_addresses": [ 00:15:58.298 { 00:15:58.298 "trtype": "VFIOUSER", 00:15:58.298 "adrfam": "IPv4", 00:15:58.298 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:58.298 "trsvcid": "0" 00:15:58.298 } 00:15:58.298 ], 00:15:58.298 "allow_any_host": true, 00:15:58.298 "hosts": [], 00:15:58.298 "serial_number": "SPDK1", 00:15:58.298 "model_number": "SPDK bdev Controller", 00:15:58.298 "max_namespaces": 32, 00:15:58.298 "min_cntlid": 1, 00:15:58.298 "max_cntlid": 65519, 00:15:58.298 "namespaces": [ 00:15:58.298 { 00:15:58.298 "nsid": 1, 00:15:58.298 "bdev_name": "Malloc1", 00:15:58.298 "name": "Malloc1", 00:15:58.298 "nguid": "7CD2D539F3DB4AE8BFF85FD7A4E890B0", 00:15:58.298 "uuid": "7cd2d539-f3db-4ae8-bff8-5fd7a4e890b0" 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "nsid": 2, 00:15:58.298 "bdev_name": "Malloc3", 00:15:58.298 "name": "Malloc3", 00:15:58.298 "nguid": "8C8EE253771149CFB115CCAD5C38E2D2", 00:15:58.298 "uuid": "8c8ee253-7711-49cf-b115-ccad5c38e2d2" 00:15:58.298 } 00:15:58.298 ] 00:15:58.298 }, 00:15:58.298 { 00:15:58.298 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:58.298 "subtype": "NVMe", 00:15:58.298 "listen_addresses": [ 00:15:58.298 { 00:15:58.298 "trtype": "VFIOUSER", 00:15:58.298 "adrfam": "IPv4", 00:15:58.298 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:58.298 "trsvcid": "0" 00:15:58.298 } 00:15:58.298 ], 00:15:58.298 "allow_any_host": true, 00:15:58.298 "hosts": [], 00:15:58.298 "serial_number": "SPDK2", 00:15:58.298 "model_number": "SPDK bdev 
Controller", 00:15:58.298 "max_namespaces": 32, 00:15:58.298 "min_cntlid": 1, 00:15:58.298 "max_cntlid": 65519, 00:15:58.298 "namespaces": [ 00:15:58.298 { 00:15:58.298 "nsid": 1, 00:15:58.298 "bdev_name": "Malloc2", 00:15:58.298 "name": "Malloc2", 00:15:58.298 "nguid": "2C22EA65E52340448BDB363B23F9744D", 00:15:58.298 "uuid": "2c22ea65-e523-4044-8bdb-363b23f9744d" 00:15:58.298 } 00:15:58.298 ] 00:15:58.298 } 00:15:58.298 ] 00:15:58.298 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 302732 00:15:58.298 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.298 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:58.298 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:58.298 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:58.298 [2024-11-05 19:05:27.551538] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:58.298 [2024-11-05 19:05:27.551584] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302842 ] 00:15:58.298 [2024-11-05 19:05:27.605819] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:58.298 [2024-11-05 19:05:27.608029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.298 [2024-11-05 19:05:27.608051] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff3b446e000 00:15:58.298 [2024-11-05 19:05:27.609031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.610037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.611060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.612051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.613060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.614072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.615080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.298 [2024-11-05 19:05:27.616084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:58.298 [2024-11-05 19:05:27.617092] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.298 [2024-11-05 19:05:27.617106] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff3b4463000 00:15:58.298 [2024-11-05 19:05:27.618431] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:58.559 [2024-11-05 19:05:27.637903] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:58.559 [2024-11-05 19:05:27.637929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:58.559 [2024-11-05 19:05:27.643014] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.559 [2024-11-05 19:05:27.643061] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:58.559 [2024-11-05 19:05:27.643144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:58.559 [2024-11-05 19:05:27.643159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:58.559 [2024-11-05 19:05:27.643164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:58.559 [2024-11-05 19:05:27.644016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:58.559 [2024-11-05 19:05:27.644026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:58.559 [2024-11-05 19:05:27.644034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:58.559 [2024-11-05 19:05:27.645021] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.559 [2024-11-05 19:05:27.645030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:58.559 [2024-11-05 19:05:27.645038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.646026] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:58.559 [2024-11-05 19:05:27.646035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.647030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:58.559 [2024-11-05 19:05:27.647039] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:58.559 [2024-11-05 19:05:27.647044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.647051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.647158] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:58.559 [2024-11-05 19:05:27.647164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.647169] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:58.559 [2024-11-05 19:05:27.648039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:58.559 [2024-11-05 19:05:27.649043] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:58.559 [2024-11-05 19:05:27.650051] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:58.559 [2024-11-05 19:05:27.651055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.559 [2024-11-05 19:05:27.651095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:58.559 [2024-11-05 19:05:27.652066] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:58.559 [2024-11-05 19:05:27.652075] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:58.559 [2024-11-05 19:05:27.652080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.652101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:58.560 [2024-11-05 19:05:27.652112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.652125] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.560 [2024-11-05 19:05:27.652130] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.560 [2024-11-05 19:05:27.652134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.560 [2024-11-05 19:05:27.652145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.658755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:58.560 
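The register traffic in this bring-up is the standard NVMe controller initialization sequence, carried over the vfio-user socket instead of a PCI BAR, and the offsets are the usual NVMe register map: 0x00 is CAP (read above as 0x201e0100ff), 0x08 is VS (0x10300, i.e. NVMe 1.3.0, matching the identify output below), 0x14 is CC, 0x1c is CSTS, 0x24 is AQA (0xff00ff: 256-entry admin SQ and CQ, zero-based), and 0x28/0x30 are ASQ/ACQ, the admin queue base addresses. The CC write of 0x460001 decodes as EN=1 with 2^6 = 64-byte SQ entries and 2^4 = 16-byte CQ entries; at teardown further down, CC becomes 0x464001, the same value with SHN=01b, a normal-shutdown notification. A quick field decode in shell:

$ printf 'EN=%d IOSQES=%d IOCQES=%d\n' $(( 0x460001 & 1 )) $(( (0x460001 >> 16) & 0xf )) $(( (0x460001 >> 20) & 0xf ))
EN=1 IOSQES=6 IOCQES=4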
[2024-11-05 19:05:27.658768] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:58.560 [2024-11-05 19:05:27.658773] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:58.560 [2024-11-05 19:05:27.658777] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:58.560 [2024-11-05 19:05:27.658784] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:58.560 [2024-11-05 19:05:27.658789] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:58.560 [2024-11-05 19:05:27.658795] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:58.560 [2024-11-05 19:05:27.658801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.658808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.658818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.666753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.666769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.560 [2024-11-05 19:05:27.666778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.560 [2024-11-05 19:05:27.666786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.560 [2024-11-05 19:05:27.666795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.560 [2024-11-05 19:05:27.666799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.666806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.666815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.674751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.674761] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:58.560 [2024-11-05 19:05:27.674766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:58.560 [2024-11-05 19:05:27.674773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.674779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.674788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.682751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.682820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.682828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.682836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:58.560 [2024-11-05 19:05:27.682843] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:58.560 [2024-11-05 19:05:27.682847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.560 [2024-11-05 19:05:27.682853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.690752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.690763] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:58.560 [2024-11-05 19:05:27.690772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.690780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.690787] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.560 [2024-11-05 19:05:27.690791] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.560 [2024-11-05 19:05:27.690794] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.560 [2024-11-05 19:05:27.690801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.698751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.698765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.698773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.698781] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.560 [2024-11-05 19:05:27.698785] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.560 [2024-11-05 19:05:27.698788] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.560 [2024-11-05 19:05:27.698795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.706751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.706761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706796] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:58.560 [2024-11-05 19:05:27.706803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:58.560 [2024-11-05 19:05:27.706808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:58.560 [2024-11-05 19:05:27.706825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.714751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:58.560 [2024-11-05 19:05:27.714765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:58.560 [2024-11-05 19:05:27.722751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:58.561 [2024-11-05 19:05:27.722764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:58.561 [2024-11-05 19:05:27.730755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
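One completion in this stretch is worth decoding: SET FEATURES NUMBER OF QUEUES returned cdw0:7e007e a few lines up. For that feature the completion dword packs the allocated I/O completion queues in bits 31:16 and submission queues in bits 15:0, both zero-based, so 0x7e = 126 means 127 of each, matching "Number of I/O Submission Queues: 127" in the controller data further down:

$ printf 'NSQA=%d NCQA=%d (0-based)\n' $(( 0x7e007e & 0xffff )) $(( (0x7e007e >> 16) & 0xffff ))
NSQA=126 NCQA=126 (0-based)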
00:15:58.561 [2024-11-05 19:05:27.730773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:58.561 [2024-11-05 19:05:27.738750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:58.561 [2024-11-05 19:05:27.738767] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:58.561 [2024-11-05 19:05:27.738772] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:58.561 [2024-11-05 19:05:27.738776] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:58.561 [2024-11-05 19:05:27.738779] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:58.561 [2024-11-05 19:05:27.738783] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:58.561 [2024-11-05 19:05:27.738789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:58.561 [2024-11-05 19:05:27.738797] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:58.561 [2024-11-05 19:05:27.738801] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:58.561 [2024-11-05 19:05:27.738805] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.561 [2024-11-05 19:05:27.738811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:58.561 [2024-11-05 19:05:27.738818] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:58.561 [2024-11-05 19:05:27.738822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.561 [2024-11-05 19:05:27.738826] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.561 [2024-11-05 19:05:27.738831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.561 [2024-11-05 19:05:27.738841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:58.561 [2024-11-05 19:05:27.738845] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:58.561 [2024-11-05 19:05:27.738848] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.561 [2024-11-05 19:05:27.738854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:58.561 [2024-11-05 19:05:27.746751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:58.561 [2024-11-05 19:05:27.746766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:58.561 [2024-11-05 19:05:27.746777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:58.561 
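The GET LOG PAGE commands above fetch the standard pages the host caches at attach time, and their cdw10 values line up with the PRP lengths printed beside them: bits 7:0 carry the log identifier and bits 31:16 the dword count minus one. So 07ff0001 is LID 01h (Error Information) for (0x7ff+1)*4 = 8192 bytes (hence the two-PRP, len:8192 transfer), 007f0002 is LID 02h (SMART / Health) for 512 bytes, 007f0003 is LID 03h (Firmware Slot) for 512 bytes, and 03ff0005 is LID 05h (Command Effects) for 4096 bytes. For example:

$ printf 'LID=%02xh bytes=%d\n' $(( 0x07ff0001 & 0xff )) $(( ((0x07ff0001 >> 16) + 1) * 4 ))
LID=01h bytes=8192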
[2024-11-05 19:05:27.746784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:58.561 ===================================================== 00:15:58.561 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.561 ===================================================== 00:15:58.561 Controller Capabilities/Features 00:15:58.561 ================================ 00:15:58.561 Vendor ID: 4e58 00:15:58.561 Subsystem Vendor ID: 4e58 00:15:58.561 Serial Number: SPDK2 00:15:58.561 Model Number: SPDK bdev Controller 00:15:58.561 Firmware Version: 25.01 00:15:58.561 Recommended Arb Burst: 6 00:15:58.561 IEEE OUI Identifier: 8d 6b 50 00:15:58.561 Multi-path I/O 00:15:58.561 May have multiple subsystem ports: Yes 00:15:58.561 May have multiple controllers: Yes 00:15:58.561 Associated with SR-IOV VF: No 00:15:58.561 Max Data Transfer Size: 131072 00:15:58.561 Max Number of Namespaces: 32 00:15:58.561 Max Number of I/O Queues: 127 00:15:58.561 NVMe Specification Version (VS): 1.3 00:15:58.561 NVMe Specification Version (Identify): 1.3 00:15:58.561 Maximum Queue Entries: 256 00:15:58.561 Contiguous Queues Required: Yes 00:15:58.561 Arbitration Mechanisms Supported 00:15:58.561 Weighted Round Robin: Not Supported 00:15:58.561 Vendor Specific: Not Supported 00:15:58.561 Reset Timeout: 15000 ms 00:15:58.561 Doorbell Stride: 4 bytes 00:15:58.561 NVM Subsystem Reset: Not Supported 00:15:58.561 Command Sets Supported 00:15:58.561 NVM Command Set: Supported 00:15:58.561 Boot Partition: Not Supported 00:15:58.561 Memory Page Size Minimum: 4096 bytes 00:15:58.561 Memory Page Size Maximum: 4096 bytes 00:15:58.561 Persistent Memory Region: Not Supported 00:15:58.561 Optional Asynchronous Events Supported 00:15:58.561 Namespace Attribute Notices: Supported 00:15:58.561 Firmware Activation Notices: Not Supported 00:15:58.561 ANA Change Notices: Not Supported 00:15:58.561 PLE Aggregate Log Change Notices: Not Supported 00:15:58.561 LBA Status Info Alert Notices: Not Supported 00:15:58.561 EGE Aggregate Log Change Notices: Not Supported 00:15:58.561 Normal NVM Subsystem Shutdown event: Not Supported 00:15:58.561 Zone Descriptor Change Notices: Not Supported 00:15:58.561 Discovery Log Change Notices: Not Supported 00:15:58.561 Controller Attributes 00:15:58.561 128-bit Host Identifier: Supported 00:15:58.561 Non-Operational Permissive Mode: Not Supported 00:15:58.561 NVM Sets: Not Supported 00:15:58.561 Read Recovery Levels: Not Supported 00:15:58.561 Endurance Groups: Not Supported 00:15:58.561 Predictable Latency Mode: Not Supported 00:15:58.561 Traffic Based Keep ALive: Not Supported 00:15:58.561 Namespace Granularity: Not Supported 00:15:58.561 SQ Associations: Not Supported 00:15:58.561 UUID List: Not Supported 00:15:58.561 Multi-Domain Subsystem: Not Supported 00:15:58.561 Fixed Capacity Management: Not Supported 00:15:58.561 Variable Capacity Management: Not Supported 00:15:58.561 Delete Endurance Group: Not Supported 00:15:58.561 Delete NVM Set: Not Supported 00:15:58.561 Extended LBA Formats Supported: Not Supported 00:15:58.561 Flexible Data Placement Supported: Not Supported 00:15:58.561 00:15:58.561 Controller Memory Buffer Support 00:15:58.561 ================================ 00:15:58.561 Supported: No 00:15:58.561 00:15:58.561 Persistent Memory Region Support 00:15:58.561 ================================ 00:15:58.561 Supported: No 00:15:58.561 00:15:58.561 Admin Command Set Attributes 
00:15:58.561 ============================ 00:15:58.561 Security Send/Receive: Not Supported 00:15:58.561 Format NVM: Not Supported 00:15:58.561 Firmware Activate/Download: Not Supported 00:15:58.561 Namespace Management: Not Supported 00:15:58.561 Device Self-Test: Not Supported 00:15:58.561 Directives: Not Supported 00:15:58.561 NVMe-MI: Not Supported 00:15:58.561 Virtualization Management: Not Supported 00:15:58.561 Doorbell Buffer Config: Not Supported 00:15:58.561 Get LBA Status Capability: Not Supported 00:15:58.561 Command & Feature Lockdown Capability: Not Supported 00:15:58.561 Abort Command Limit: 4 00:15:58.561 Async Event Request Limit: 4 00:15:58.561 Number of Firmware Slots: N/A 00:15:58.561 Firmware Slot 1 Read-Only: N/A 00:15:58.561 Firmware Activation Without Reset: N/A 00:15:58.561 Multiple Update Detection Support: N/A 00:15:58.561 Firmware Update Granularity: No Information Provided 00:15:58.561 Per-Namespace SMART Log: No 00:15:58.561 Asymmetric Namespace Access Log Page: Not Supported 00:15:58.561 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:58.561 Command Effects Log Page: Supported 00:15:58.561 Get Log Page Extended Data: Supported 00:15:58.561 Telemetry Log Pages: Not Supported 00:15:58.561 Persistent Event Log Pages: Not Supported 00:15:58.561 Supported Log Pages Log Page: May Support 00:15:58.561 Commands Supported & Effects Log Page: Not Supported 00:15:58.561 Feature Identifiers & Effects Log Page:May Support 00:15:58.561 NVMe-MI Commands & Effects Log Page: May Support 00:15:58.561 Data Area 4 for Telemetry Log: Not Supported 00:15:58.561 Error Log Page Entries Supported: 128 00:15:58.561 Keep Alive: Supported 00:15:58.561 Keep Alive Granularity: 10000 ms 00:15:58.561 00:15:58.561 NVM Command Set Attributes 00:15:58.561 ========================== 00:15:58.561 Submission Queue Entry Size 00:15:58.561 Max: 64 00:15:58.561 Min: 64 00:15:58.561 Completion Queue Entry Size 00:15:58.561 Max: 16 00:15:58.561 Min: 16 00:15:58.561 Number of Namespaces: 32 00:15:58.561 Compare Command: Supported 00:15:58.561 Write Uncorrectable Command: Not Supported 00:15:58.561 Dataset Management Command: Supported 00:15:58.561 Write Zeroes Command: Supported 00:15:58.561 Set Features Save Field: Not Supported 00:15:58.561 Reservations: Not Supported 00:15:58.561 Timestamp: Not Supported 00:15:58.561 Copy: Supported 00:15:58.561 Volatile Write Cache: Present 00:15:58.561 Atomic Write Unit (Normal): 1 00:15:58.561 Atomic Write Unit (PFail): 1 00:15:58.561 Atomic Compare & Write Unit: 1 00:15:58.561 Fused Compare & Write: Supported 00:15:58.561 Scatter-Gather List 00:15:58.561 SGL Command Set: Supported (Dword aligned) 00:15:58.562 SGL Keyed: Not Supported 00:15:58.562 SGL Bit Bucket Descriptor: Not Supported 00:15:58.562 SGL Metadata Pointer: Not Supported 00:15:58.562 Oversized SGL: Not Supported 00:15:58.562 SGL Metadata Address: Not Supported 00:15:58.562 SGL Offset: Not Supported 00:15:58.562 Transport SGL Data Block: Not Supported 00:15:58.562 Replay Protected Memory Block: Not Supported 00:15:58.562 00:15:58.562 Firmware Slot Information 00:15:58.562 ========================= 00:15:58.562 Active slot: 1 00:15:58.562 Slot 1 Firmware Revision: 25.01 00:15:58.562 00:15:58.562 00:15:58.562 Commands Supported and Effects 00:15:58.562 ============================== 00:15:58.562 Admin Commands 00:15:58.562 -------------- 00:15:58.562 Get Log Page (02h): Supported 00:15:58.562 Identify (06h): Supported 00:15:58.562 Abort (08h): Supported 00:15:58.562 Set Features (09h): Supported 
00:15:58.562 Get Features (0Ah): Supported 00:15:58.562 Asynchronous Event Request (0Ch): Supported 00:15:58.562 Keep Alive (18h): Supported 00:15:58.562 I/O Commands 00:15:58.562 ------------ 00:15:58.562 Flush (00h): Supported LBA-Change 00:15:58.562 Write (01h): Supported LBA-Change 00:15:58.562 Read (02h): Supported 00:15:58.562 Compare (05h): Supported 00:15:58.562 Write Zeroes (08h): Supported LBA-Change 00:15:58.562 Dataset Management (09h): Supported LBA-Change 00:15:58.562 Copy (19h): Supported LBA-Change 00:15:58.562 00:15:58.562 Error Log 00:15:58.562 ========= 00:15:58.562 00:15:58.562 Arbitration 00:15:58.562 =========== 00:15:58.562 Arbitration Burst: 1 00:15:58.562 00:15:58.562 Power Management 00:15:58.562 ================ 00:15:58.562 Number of Power States: 1 00:15:58.562 Current Power State: Power State #0 00:15:58.562 Power State #0: 00:15:58.562 Max Power: 0.00 W 00:15:58.562 Non-Operational State: Operational 00:15:58.562 Entry Latency: Not Reported 00:15:58.562 Exit Latency: Not Reported 00:15:58.562 Relative Read Throughput: 0 00:15:58.562 Relative Read Latency: 0 00:15:58.562 Relative Write Throughput: 0 00:15:58.562 Relative Write Latency: 0 00:15:58.562 Idle Power: Not Reported 00:15:58.562 Active Power: Not Reported 00:15:58.562 Non-Operational Permissive Mode: Not Supported 00:15:58.562 00:15:58.562 Health Information 00:15:58.562 ================== 00:15:58.562 Critical Warnings: 00:15:58.562 Available Spare Space: OK 00:15:58.562 Temperature: OK 00:15:58.562 Device Reliability: OK 00:15:58.562 Read Only: No 00:15:58.562 Volatile Memory Backup: OK 00:15:58.562 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:58.562 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:58.562 Available Spare: 0% 00:15:58.562 Available Spare Threshold: 0% 00:15:58.562 Life Percentage Used: 0% 00:15:58.562 Data Units Read: 0 00:15:58.562 Data Units Written: 0 00:15:58.562 Host Read Commands: 0 00:15:58.562 Host Write Commands: 0 00:15:58.562 Controller Busy Time: 0 minutes 00:15:58.562 Power Cycles: 0 00:15:58.562 Power On Hours: 0 hours 00:15:58.562 Unsafe Shutdowns: 0 00:15:58.562 Unrecoverable Media Errors: 0 00:15:58.562 Lifetime Error Log Entries: 0 00:15:58.562 Warning Temperature Time: 0 minutes 00:15:58.562 Critical Temperature Time: 0 minutes 00:15:58.562 00:15:58.562 Number of Queues 00:15:58.562 ================ 00:15:58.562 Number of I/O Submission Queues: 127 00:15:58.562 Number of I/O Completion Queues: 127 00:15:58.562 00:15:58.562 Active Namespaces 00:15:58.562 ================= 00:15:58.562 Namespace ID:1 00:15:58.562 Error Recovery Timeout: Unlimited 00:15:58.562 Command Set Identifier: NVM (00h) 00:15:58.562 Deallocate: Supported 00:15:58.562 Deallocated/Unwritten Error: Not Supported 00:15:58.562 Deallocated Read Value: Unknown 00:15:58.562 Deallocate in Write Zeroes: Not Supported 00:15:58.562 Deallocated Guard Field: 0xFFFF 00:15:58.562 Flush: Supported 00:15:58.562 Reservation: Supported 00:15:58.562 Namespace Sharing Capabilities: Multiple Controllers 00:15:58.562 Size (in LBAs): 131072 (0GiB) 00:15:58.562 Capacity (in LBAs): 131072 (0GiB) 00:15:58.562 Utilization (in LBAs): 131072 (0GiB) 00:15:58.562 NGUID: 2C22EA65E52340448BDB363B23F9744D 00:15:58.562 UUID: 2c22ea65-e523-4044-8bdb-363b23f9744d 00:15:58.562 Thin Provisioning: Not Supported 00:15:58.562 Per-NS Atomic Units: Yes 00:15:58.562 Atomic Boundary Size (Normal): 0 00:15:58.562 Atomic Boundary Size (PFail): 0 00:15:58.562 Atomic Boundary Offset: 0 00:15:58.562 Maximum Single Source Range Length: 65535 00:15:58.562 Maximum Copy Length: 65535 00:15:58.562 Maximum Source Range Count: 1 00:15:58.562 NGUID/EUI64 Never Reused: No 00:15:58.562 Namespace Write Protected: No 00:15:58.562 Number of LBA Formats: 1 00:15:58.562 Current LBA Format: LBA Format #00 00:15:58.563 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:58.563 00:15:58.563 [2024-11-05 19:05:27.746884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:58.562 [2024-11-05 19:05:27.754751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:58.562 [2024-11-05 19:05:27.754783] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:58.562 [2024-11-05 19:05:27.754792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.562 [2024-11-05 19:05:27.754799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.562 [2024-11-05 19:05:27.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.562 [2024-11-05 19:05:27.754811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.562 [2024-11-05 19:05:27.754856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:58.562 [2024-11-05 19:05:27.754867] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:58.562 [2024-11-05 19:05:27.755862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.562 [2024-11-05 19:05:27.755917] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:58.562 [2024-11-05 19:05:27.755924] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:58.562 [2024-11-05 19:05:27.756863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:58.562 [2024-11-05 19:05:27.756875] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:58.562 [2024-11-05 19:05:27.756923] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:58.562 [2024-11-05 19:05:27.758299] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:58.563 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:58.823 [2024-11-05 19:05:27.962143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.258 Initializing NVMe Controllers 00:16:04.258
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.258 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:04.258 Initialization complete. Launching workers. 00:16:04.258 ======================================================== 00:16:04.258 Latency(us) 00:16:04.258 Device Information : IOPS MiB/s Average min max 00:16:04.258 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39996.16 156.24 3200.15 842.91 6947.68 00:16:04.258 ======================================================== 00:16:04.258 Total : 39996.16 156.24 3200.15 842.91 6947.68 00:16:04.258 00:16:04.258 [2024-11-05 19:05:33.064945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.258 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:04.258 [2024-11-05 19:05:33.256561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.546 Initializing NVMe Controllers 00:16:09.546 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:09.546 Initialization complete. Launching workers. 00:16:09.546 ======================================================== 00:16:09.546 Latency(us) 00:16:09.546 Device Information : IOPS MiB/s Average min max 00:16:09.546 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34870.14 136.21 3670.40 1110.07 8359.37 00:16:09.546 ======================================================== 00:16:09.546 Total : 34870.14 136.21 3670.40 1110.07 8359.37 00:16:09.546 00:16:09.546 [2024-11-05 19:05:38.278686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.546 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:09.546 [2024-11-05 19:05:38.483916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.829 [2024-11-05 19:05:43.616827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.829 Initializing NVMe Controllers 00:16:14.829 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:14.829 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:14.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:14.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:14.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:14.829 Initialization complete. Launching workers. 
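The reconnect run just launched uses -c 0xE, and the core masks in these tools are plain bitmaps: 0xE = 1110b selects lcores 1, 2 and 3, which is why the next lines start worker threads on exactly those cores (and why the single-core perf runs above used -c 0x2, lcore 1). -M 50 makes the randrw workload a 50/50 read/write mix, as with spdk_nvme_perf. A quick way to eyeball a mask:

$ echo 'obase=2; ibase=16; E' | bc
1110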
00:16:14.829 Starting thread on core 2 00:16:14.829 Starting thread on core 3 00:16:14.829 Starting thread on core 1 00:16:14.829 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:14.829 [2024-11-05 19:05:43.895607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.127 [2024-11-05 19:05:46.949170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.127 Initializing NVMe Controllers 00:16:18.127 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.127 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.127 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:18.127 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:18.127 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:18.127 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:18.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:18.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:18.127 Initialization complete. Launching workers. 00:16:18.127 Starting thread on core 1 with urgent priority queue 00:16:18.127 Starting thread on core 2 with urgent priority queue 00:16:18.127 Starting thread on core 3 with urgent priority queue 00:16:18.127 Starting thread on core 0 with urgent priority queue 00:16:18.127 SPDK bdev Controller (SPDK2 ) core 0: 11635.00 IO/s 8.59 secs/100000 ios 00:16:18.127 SPDK bdev Controller (SPDK2 ) core 1: 10716.33 IO/s 9.33 secs/100000 ios 00:16:18.127 SPDK bdev Controller (SPDK2 ) core 2: 16519.67 IO/s 6.05 secs/100000 ios 00:16:18.127 SPDK bdev Controller (SPDK2 ) core 3: 8001.33 IO/s 12.50 secs/100000 ios 00:16:18.127 ======================================================== 00:16:18.127 00:16:18.127 19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:18.127 [2024-11-05 19:05:47.235172] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.127 Initializing NVMe Controllers 00:16:18.127 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.127 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.127 Namespace ID: 1 size: 0GB 00:16:18.127 Initialization complete. 00:16:18.127 INFO: using host memory buffer for IO 00:16:18.127 Hello world! 
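# The example binaries exercised here (reconnect, arbitration, hello_world)
# all address the controller through the same -r transport ID; hello_world,
# for instance, can be rerun standalone as (sketch, same build tree assumed):
./build/examples/hello_world -d 256 -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'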
00:16:18.127 [2024-11-05 19:05:47.247241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.127 19:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:18.387 [2024-11-05 19:05:47.526708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:19.327 Initializing NVMe Controllers 00:16:19.327 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.327 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.327 Initialization complete. Launching workers. 00:16:19.327 submit (in ns) avg, min, max = 6174.8, 3903.3, 4000285.8 00:16:19.327 complete (in ns) avg, min, max = 19220.8, 2378.3, 3999348.3 00:16:19.327 00:16:19.327 Submit histogram 00:16:19.327 ================ 00:16:19.327 Range in us Cumulative Count 00:16:19.327 3.893 - 3.920: 1.2307% ( 232) 00:16:19.327 3.920 - 3.947: 6.9811% ( 1084) 00:16:19.327 3.947 - 3.973: 17.6914% ( 2019) 00:16:19.327 3.973 - 4.000: 29.8180% ( 2286) 00:16:19.327 4.000 - 4.027: 39.5417% ( 1833) 00:16:19.327 4.027 - 4.053: 50.1194% ( 1994) 00:16:19.327 4.053 - 4.080: 65.4448% ( 2889) 00:16:19.327 4.080 - 4.107: 81.3113% ( 2991) 00:16:19.327 4.107 - 4.133: 92.4672% ( 2103) 00:16:19.328 4.133 - 4.160: 97.4749% ( 944) 00:16:19.328 4.160 - 4.187: 99.0292% ( 293) 00:16:19.328 4.187 - 4.213: 99.4059% ( 71) 00:16:19.328 4.213 - 4.240: 99.4536% ( 9) 00:16:19.328 4.240 - 4.267: 99.4695% ( 3) 00:16:19.328 4.267 - 4.293: 99.4748% ( 1) 00:16:19.328 4.320 - 4.347: 99.4907% ( 3) 00:16:19.328 4.347 - 4.373: 99.4960% ( 1) 00:16:19.328 4.400 - 4.427: 99.5014% ( 1) 00:16:19.328 4.453 - 4.480: 99.5067% ( 1) 00:16:19.328 4.587 - 4.613: 99.5120% ( 1) 00:16:19.328 4.800 - 4.827: 99.5173% ( 1) 00:16:19.328 4.827 - 4.853: 99.5226% ( 1) 00:16:19.328 4.880 - 4.907: 99.5279% ( 1) 00:16:19.328 5.147 - 5.173: 99.5332% ( 1) 00:16:19.328 5.200 - 5.227: 99.5385% ( 1) 00:16:19.328 5.360 - 5.387: 99.5438% ( 1) 00:16:19.328 5.520 - 5.547: 99.5491% ( 1) 00:16:19.328 5.653 - 5.680: 99.5544% ( 1) 00:16:19.328 5.947 - 5.973: 99.5650% ( 2) 00:16:19.328 5.973 - 6.000: 99.5703% ( 1) 00:16:19.328 6.000 - 6.027: 99.5756% ( 1) 00:16:19.328 6.053 - 6.080: 99.5809% ( 1) 00:16:19.328 6.133 - 6.160: 99.5862% ( 1) 00:16:19.328 6.160 - 6.187: 99.5968% ( 2) 00:16:19.328 6.213 - 6.240: 99.6074% ( 2) 00:16:19.328 6.240 - 6.267: 99.6234% ( 3) 00:16:19.328 6.293 - 6.320: 99.6287% ( 1) 00:16:19.328 6.373 - 6.400: 99.6340% ( 1) 00:16:19.328 6.400 - 6.427: 99.6393% ( 1) 00:16:19.328 6.427 - 6.453: 99.6446% ( 1) 00:16:19.328 6.453 - 6.480: 99.6499% ( 1) 00:16:19.328 6.507 - 6.533: 99.6658% ( 3) 00:16:19.328 6.533 - 6.560: 99.6711% ( 1) 00:16:19.328 6.560 - 6.587: 99.6870% ( 3) 00:16:19.328 6.613 - 6.640: 99.6976% ( 2) 00:16:19.328 6.640 - 6.667: 99.7082% ( 2) 00:16:19.328 6.693 - 6.720: 99.7242% ( 3) 00:16:19.328 6.720 - 6.747: 99.7295% ( 1) 00:16:19.328 6.747 - 6.773: 99.7348% ( 1) 00:16:19.328 6.773 - 6.800: 99.7401% ( 1) 00:16:19.328 6.800 - 6.827: 99.7454% ( 1) 00:16:19.328 6.827 - 6.880: 99.7666% ( 4) 00:16:19.328 6.933 - 6.987: 99.7825% ( 3) 00:16:19.328 6.987 - 7.040: 99.8037% ( 4) 00:16:19.328 7.040 - 7.093: 99.8143% ( 2) 00:16:19.328 7.093 - 7.147: 99.8356% ( 4) 00:16:19.328 7.147 - 7.200: 99.8462% ( 2) 00:16:19.328 7.200 - 7.253: 99.8515% ( 1) 
00:16:19.328 7.253 - 7.307: 99.8621% ( 2) 00:16:19.328 7.307 - 7.360: 99.8674% ( 1) 00:16:19.328 7.360 - 7.413: 99.8833% ( 3) 00:16:19.328 7.520 - 7.573: 99.9045% ( 4) 00:16:19.328 7.573 - 7.627: 99.9151% ( 2) 00:16:19.328 7.627 - 7.680: 99.9204% ( 1) 00:16:19.328 7.680 - 7.733: 99.9257% ( 1) 00:16:19.328 7.947 - 8.000: 99.9310% ( 1) 00:16:19.328 8.533 - 8.587: 99.9363% ( 1) 00:16:19.328 8.640 - 8.693: 99.9416% ( 1) 00:16:19.328 9.067 - 9.120: 99.9470% ( 1) 00:16:19.328 3986.773 - 4014.080: 100.0000% ( 10) 00:16:19.328 00:16:19.328 Complete histogram 00:16:19.328 ================== 00:16:19.328 Range in us Cumulative Count 00:16:19.328 2.373 - 2.387: 0.0053% ( 1) 00:16:19.328 2.387 - 2.400: 0.3342% ( 62) 00:16:19.328 2.400 - 2.413: 0.7002% ( 69) 00:16:19.328 2.413 - 2.427: 0.8222% ( 23) 00:16:19.328 2.427 - 2.440: 0.9071% ( 16) 00:16:19.328 2.440 - 2.453: 33.9664% ( 6232) 00:16:19.328 2.453 - 2.467: 60.8880% ( 5075) 00:16:19.328 2.467 - [2024-11-05 19:05:48.623429] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:19.589 2.480: 71.2588% ( 1955) 00:16:19.589 2.480 - 2.493: 79.1364% ( 1485) 00:16:19.589 2.493 - 2.507: 81.2848% ( 405) 00:16:19.589 2.507 - 2.520: 83.4598% ( 410) 00:16:19.589 2.520 - 2.533: 88.3189% ( 916) 00:16:19.589 2.533 - 2.547: 94.1223% ( 1094) 00:16:19.589 2.547 - 2.560: 97.0452% ( 551) 00:16:19.589 2.560 - 2.573: 98.5677% ( 287) 00:16:19.589 2.573 - 2.587: 99.2043% ( 120) 00:16:19.589 2.587 - 2.600: 99.3687% ( 31) 00:16:19.589 2.600 - 2.613: 99.3900% ( 4) 00:16:19.589 2.613 - 2.627: 99.3953% ( 1) 00:16:19.589 2.627 - 2.640: 99.4006% ( 1) 00:16:19.589 2.853 - 2.867: 99.4059% ( 1) 00:16:19.589 2.920 - 2.933: 99.4112% ( 1) 00:16:19.589 2.987 - 3.000: 99.4165% ( 1) 00:16:19.589 3.160 - 3.173: 99.4218% ( 1) 00:16:19.589 4.400 - 4.427: 99.4271% ( 1) 00:16:19.589 4.453 - 4.480: 99.4324% ( 1) 00:16:19.589 4.640 - 4.667: 99.4377% ( 1) 00:16:19.589 4.667 - 4.693: 99.4430% ( 1) 00:16:19.589 5.067 - 5.093: 99.4536% ( 2) 00:16:19.589 5.200 - 5.227: 99.4642% ( 2) 00:16:19.589 5.227 - 5.253: 99.4695% ( 1) 00:16:19.589 5.253 - 5.280: 99.4801% ( 2) 00:16:19.589 5.307 - 5.333: 99.4907% ( 2) 00:16:19.589 5.333 - 5.360: 99.4960% ( 1) 00:16:19.589 5.360 - 5.387: 99.5014% ( 1) 00:16:19.589 5.387 - 5.413: 99.5067% ( 1) 00:16:19.589 5.413 - 5.440: 99.5120% ( 1) 00:16:19.589 5.440 - 5.467: 99.5226% ( 2) 00:16:19.589 5.520 - 5.547: 99.5279% ( 1) 00:16:19.589 5.573 - 5.600: 99.5332% ( 1) 00:16:19.589 5.627 - 5.653: 99.5385% ( 1) 00:16:19.589 5.653 - 5.680: 99.5438% ( 1) 00:16:19.589 5.680 - 5.707: 99.5544% ( 2) 00:16:19.589 6.240 - 6.267: 99.5597% ( 1) 00:16:19.589 6.347 - 6.373: 99.5650% ( 1) 00:16:19.589 9.867 - 9.920: 99.5703% ( 1) 00:16:19.589 14.507 - 14.613: 99.5756% ( 1) 00:16:19.589 16.640 - 16.747: 99.5809% ( 1) 00:16:19.589 3986.773 - 4014.080: 100.0000% ( 79) 00:16:19.589 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:19.589 [ 00:16:19.589 { 00:16:19.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:19.589 "subtype": "Discovery", 00:16:19.589 "listen_addresses": [], 00:16:19.589 "allow_any_host": true, 00:16:19.589 "hosts": [] 00:16:19.589 }, 00:16:19.589 { 00:16:19.589 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:19.589 "subtype": "NVMe", 00:16:19.589 "listen_addresses": [ 00:16:19.589 { 00:16:19.589 "trtype": "VFIOUSER", 00:16:19.589 "adrfam": "IPv4", 00:16:19.589 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:19.589 "trsvcid": "0" 00:16:19.589 } 00:16:19.589 ], 00:16:19.589 "allow_any_host": true, 00:16:19.589 "hosts": [], 00:16:19.589 "serial_number": "SPDK1", 00:16:19.589 "model_number": "SPDK bdev Controller", 00:16:19.589 "max_namespaces": 32, 00:16:19.589 "min_cntlid": 1, 00:16:19.589 "max_cntlid": 65519, 00:16:19.589 "namespaces": [ 00:16:19.589 { 00:16:19.589 "nsid": 1, 00:16:19.589 "bdev_name": "Malloc1", 00:16:19.589 "name": "Malloc1", 00:16:19.589 "nguid": "7CD2D539F3DB4AE8BFF85FD7A4E890B0", 00:16:19.589 "uuid": "7cd2d539-f3db-4ae8-bff8-5fd7a4e890b0" 00:16:19.589 }, 00:16:19.589 { 00:16:19.589 "nsid": 2, 00:16:19.589 "bdev_name": "Malloc3", 00:16:19.589 "name": "Malloc3", 00:16:19.589 "nguid": "8C8EE253771149CFB115CCAD5C38E2D2", 00:16:19.589 "uuid": "8c8ee253-7711-49cf-b115-ccad5c38e2d2" 00:16:19.589 } 00:16:19.589 ] 00:16:19.589 }, 00:16:19.589 { 00:16:19.589 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:19.589 "subtype": "NVMe", 00:16:19.589 "listen_addresses": [ 00:16:19.589 { 00:16:19.589 "trtype": "VFIOUSER", 00:16:19.589 "adrfam": "IPv4", 00:16:19.589 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:19.589 "trsvcid": "0" 00:16:19.589 } 00:16:19.589 ], 00:16:19.589 "allow_any_host": true, 00:16:19.589 "hosts": [], 00:16:19.589 "serial_number": "SPDK2", 00:16:19.589 "model_number": "SPDK bdev Controller", 00:16:19.589 "max_namespaces": 32, 00:16:19.589 "min_cntlid": 1, 00:16:19.589 "max_cntlid": 65519, 00:16:19.589 "namespaces": [ 00:16:19.589 { 00:16:19.589 "nsid": 1, 00:16:19.589 "bdev_name": "Malloc2", 00:16:19.589 "name": "Malloc2", 00:16:19.589 "nguid": "2C22EA65E52340448BDB363B23F9744D", 00:16:19.589 "uuid": "2c22ea65-e523-4044-8bdb-363b23f9744d" 00:16:19.589 } 00:16:19.589 ] 00:16:19.589 } 00:16:19.589 ] 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=306883 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:16:19.589 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:19.590 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:19.590 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:16:19.590 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:19.590 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:19.851 Malloc4 00:16:19.851 [2024-11-05 19:05:49.035774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:19.851 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:20.111 [2024-11-05 19:05:49.216991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:20.111 Asynchronous Event Request test 00:16:20.111 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.111 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.111 Registering asynchronous event callbacks... 00:16:20.111 Starting namespace attribute notice tests for all controllers... 00:16:20.111 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:20.111 aer_cb - Changed Namespace 00:16:20.111 Cleaning up... 00:16:20.111 [ 00:16:20.111 { 00:16:20.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.111 "subtype": "Discovery", 00:16:20.111 "listen_addresses": [], 00:16:20.111 "allow_any_host": true, 00:16:20.111 "hosts": [] 00:16:20.111 }, 00:16:20.111 { 00:16:20.111 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:20.111 "subtype": "NVMe", 00:16:20.111 "listen_addresses": [ 00:16:20.111 { 00:16:20.111 "trtype": "VFIOUSER", 00:16:20.111 "adrfam": "IPv4", 00:16:20.111 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:20.111 "trsvcid": "0" 00:16:20.111 } 00:16:20.111 ], 00:16:20.111 "allow_any_host": true, 00:16:20.111 "hosts": [], 00:16:20.111 "serial_number": "SPDK1", 00:16:20.111 "model_number": "SPDK bdev Controller", 00:16:20.111 "max_namespaces": 32, 00:16:20.111 "min_cntlid": 1, 00:16:20.111 "max_cntlid": 65519, 00:16:20.111 "namespaces": [ 00:16:20.111 { 00:16:20.111 "nsid": 1, 00:16:20.111 "bdev_name": "Malloc1", 00:16:20.111 "name": "Malloc1", 00:16:20.111 "nguid": "7CD2D539F3DB4AE8BFF85FD7A4E890B0", 00:16:20.111 "uuid": "7cd2d539-f3db-4ae8-bff8-5fd7a4e890b0" 00:16:20.111 }, 00:16:20.111 { 00:16:20.111 "nsid": 2, 00:16:20.111 "bdev_name": "Malloc3", 00:16:20.111 "name": "Malloc3", 00:16:20.111 "nguid": "8C8EE253771149CFB115CCAD5C38E2D2", 00:16:20.111 "uuid": "8c8ee253-7711-49cf-b115-ccad5c38e2d2" 00:16:20.111 } 00:16:20.111 ] 00:16:20.111 }, 00:16:20.111 { 00:16:20.111 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:20.111 "subtype": "NVMe", 00:16:20.111 "listen_addresses": [ 00:16:20.111 { 00:16:20.111 "trtype": "VFIOUSER", 00:16:20.111 "adrfam": "IPv4", 00:16:20.111 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:20.111 "trsvcid": "0" 00:16:20.111 } 00:16:20.111 ], 00:16:20.111 "allow_any_host": true, 00:16:20.111 "hosts": [], 00:16:20.111 "serial_number": "SPDK2", 00:16:20.111 "model_number": "SPDK bdev 
Controller", 00:16:20.111 "max_namespaces": 32, 00:16:20.111 "min_cntlid": 1, 00:16:20.111 "max_cntlid": 65519, 00:16:20.111 "namespaces": [ 00:16:20.111 { 00:16:20.111 "nsid": 1, 00:16:20.111 "bdev_name": "Malloc2", 00:16:20.111 "name": "Malloc2", 00:16:20.111 "nguid": "2C22EA65E52340448BDB363B23F9744D", 00:16:20.111 "uuid": "2c22ea65-e523-4044-8bdb-363b23f9744d" 00:16:20.111 }, 00:16:20.111 { 00:16:20.111 "nsid": 2, 00:16:20.111 "bdev_name": "Malloc4", 00:16:20.111 "name": "Malloc4", 00:16:20.111 "nguid": "ACA75D5403FC476E9C807814FAE579F9", 00:16:20.111 "uuid": "aca75d54-03fc-476e-9c80-7814fae579f9" 00:16:20.111 } 00:16:20.111 ] 00:16:20.111 } 00:16:20.111 ] 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 306883 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 298103 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 298103 ']' 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 298103 00:16:20.111 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 298103 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 298103' 00:16:20.372 killing process with pid 298103 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 298103 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 298103 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=307213 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 307213' 00:16:20.372 Process pid: 307213 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 307213 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 307213 ']' 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:20.372 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:20.633 [2024-11-05 19:05:49.725098] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:20.633 [2024-11-05 19:05:49.726094] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:20.633 [2024-11-05 19:05:49.726138] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.633 [2024-11-05 19:05:49.798506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.633 [2024-11-05 19:05:49.833209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.633 [2024-11-05 19:05:49.833243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.633 [2024-11-05 19:05:49.833251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.633 [2024-11-05 19:05:49.833258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.633 [2024-11-05 19:05:49.833265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.633 [2024-11-05 19:05:49.834646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.633 [2024-11-05 19:05:49.834765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.633 [2024-11-05 19:05:49.834864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.633 [2024-11-05 19:05:49.834865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.633 [2024-11-05 19:05:49.889403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:20.633 [2024-11-05 19:05:49.889784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:20.633 [2024-11-05 19:05:49.890468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:20.633 [2024-11-05 19:05:49.890534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
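# Sketch of the interrupt-mode target bring-up traced above: -i 0 sets the
# shared-memory instance id, -e 0xFFFF the tracepoint group mask (see the
# notice above), -m '[0,1,2,3]' pins four reactors to cores 0-3, and
# --interrupt-mode switches the reactors from polling to event-driven
# operation, as the spdk_interrupt_mode_enable notice confirms:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode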
00:16:20.633 [2024-11-05 19:05:49.892920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:21.204 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:21.204 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:16:21.204 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:22.590 Malloc1 00:16:22.590 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:22.851 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:23.113 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:23.113 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.113 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:23.113 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:23.375 Malloc2 00:16:23.375 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:23.636 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:23.897 19:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:23.897 19:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 307213 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 307213 ']' 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 307213 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 307213 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.897 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 307213' 00:16:24.160 killing process with pid 307213 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 307213 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 307213 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:24.160 00:16:24.160 real 0m50.602s 00:16:24.160 user 3m14.110s 00:16:24.160 sys 0m2.663s 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:24.160 ************************************ 00:16:24.160 END TEST nvmf_vfio_user 00:16:24.160 ************************************ 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.160 ************************************ 00:16:24.160 START TEST nvmf_vfio_user_nvme_compliance 00:16:24.160 ************************************ 00:16:24.160 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:24.423 * Looking for test storage... 
00:16:24.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.423 --rc genhtml_branch_coverage=1 00:16:24.423 --rc genhtml_function_coverage=1 00:16:24.423 --rc genhtml_legend=1 00:16:24.423 --rc geninfo_all_blocks=1 00:16:24.423 --rc geninfo_unexecuted_blocks=1 00:16:24.423 00:16:24.423 ' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.423 --rc genhtml_branch_coverage=1 00:16:24.423 --rc genhtml_function_coverage=1 00:16:24.423 --rc genhtml_legend=1 00:16:24.423 --rc geninfo_all_blocks=1 00:16:24.423 --rc geninfo_unexecuted_blocks=1 00:16:24.423 00:16:24.423 ' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.423 --rc genhtml_branch_coverage=1 00:16:24.423 --rc genhtml_function_coverage=1 00:16:24.423 --rc genhtml_legend=1 00:16:24.423 --rc geninfo_all_blocks=1 00:16:24.423 --rc geninfo_unexecuted_blocks=1 00:16:24.423 00:16:24.423 ' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:24.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.423 --rc genhtml_branch_coverage=1 00:16:24.423 --rc genhtml_function_coverage=1 00:16:24.423 --rc genhtml_legend=1 00:16:24.423 --rc geninfo_all_blocks=1 00:16:24.423 --rc 
geninfo_unexecuted_blocks=1 00:16:24.423 00:16:24.423 ' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:24.423 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:16:24.424 19:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:24.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=307976 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 307976' 00:16:24.424 Process pid: 307976 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 307976 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 307976 ']' 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
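# Sketch of the compliance harness start-up pattern traced above, assuming
# the helpers from test/common/autotest_common.sh are sourced; waitforlisten
# blocks until the target answers on its RPC socket (/var/tmp/spdk.sock):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
waitforlisten $nvmfpid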
00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:24.424 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:24.424 [2024-11-05 19:05:53.705658] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:24.424 [2024-11-05 19:05:53.705702] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.686 [2024-11-05 19:05:53.771888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.686 [2024-11-05 19:05:53.806905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.686 [2024-11-05 19:05:53.806936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.686 [2024-11-05 19:05:53.806944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.686 [2024-11-05 19:05:53.806951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.686 [2024-11-05 19:05:53.806957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.686 [2024-11-05 19:05:53.808445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.686 [2024-11-05 19:05:53.808561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.686 [2024-11-05 19:05:53.808564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.686 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:24.686 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:16:24.686 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.627 malloc0 
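# Consolidated sketch of the RPC provisioning sequence for the compliance
# target; each call appears individually in the surrounding trace, and
# rpc_cmd is the autotest wrapper around scripts/rpc.py:
rpc_cmd nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc_cmd bdev_malloc_create 64 512 -b malloc0
rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0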
00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.627 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.888 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:25.888 00:16:25.888 00:16:25.888 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.888 http://cunit.sourceforge.net/ 00:16:25.888 00:16:25.888 00:16:25.888 Suite: nvme_compliance 00:16:25.888 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-05 19:05:55.174160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.888 [2024-11-05 19:05:55.175502] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:25.888 [2024-11-05 19:05:55.175514] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:25.888 [2024-11-05 19:05:55.175521] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:25.888 [2024-11-05 19:05:55.177180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.149 passed 00:16:26.149 Test: admin_identify_ctrlr_verify_fused ...[2024-11-05 19:05:55.271775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.149 [2024-11-05 19:05:55.274796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.149 passed 00:16:26.149 Test: admin_identify_ns ...[2024-11-05 19:05:55.371006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.149 [2024-11-05 
19:05:55.430767] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:26.149 [2024-11-05 19:05:55.438771] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:26.149 [2024-11-05 19:05:55.459872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.410 passed 00:16:26.410 Test: admin_get_features_mandatory_features ...[2024-11-05 19:05:55.553855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.410 [2024-11-05 19:05:55.556867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.410 passed 00:16:26.410 Test: admin_get_features_optional_features ...[2024-11-05 19:05:55.650422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.410 [2024-11-05 19:05:55.654440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.410 passed 00:16:26.672 Test: admin_set_features_number_of_queues ...[2024-11-05 19:05:55.746592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.672 [2024-11-05 19:05:55.851859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.672 passed 00:16:26.672 Test: admin_get_log_page_mandatory_logs ...[2024-11-05 19:05:55.943488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.672 [2024-11-05 19:05:55.946510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.672 passed 00:16:26.934 Test: admin_get_log_page_with_lpo ...[2024-11-05 19:05:56.040787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.934 [2024-11-05 19:05:56.109759] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:26.934 [2024-11-05 19:05:56.122819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.934 passed 00:16:26.934 Test: fabric_property_get ...[2024-11-05 19:05:56.214431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.934 [2024-11-05 19:05:56.215681] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:26.934 [2024-11-05 19:05:56.217444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:26.934 passed 00:16:27.194 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-05 19:05:56.312974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.194 [2024-11-05 19:05:56.314231] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:27.194 [2024-11-05 19:05:56.315996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.194 passed 00:16:27.194 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-05 19:05:56.407009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.194 [2024-11-05 19:05:56.490757] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:27.194 [2024-11-05 19:05:56.506763] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:27.194 [2024-11-05 19:05:56.511836] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.455 passed 00:16:27.455 Test: 
admin_delete_io_cq_use_admin_qid ...[2024-11-05 19:05:56.604813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.455 [2024-11-05 19:05:56.606056] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:27.455 [2024-11-05 19:05:56.608833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.455 passed 00:16:27.455 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-05 19:05:56.700999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.455 [2024-11-05 19:05:56.776757] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:27.715 [2024-11-05 19:05:56.800754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:27.715 [2024-11-05 19:05:56.805841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.715 passed 00:16:27.715 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-05 19:05:56.899887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.715 [2024-11-05 19:05:56.901160] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:27.715 [2024-11-05 19:05:56.901183] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:27.715 [2024-11-05 19:05:56.902905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.715 passed 00:16:27.715 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-05 19:05:56.995986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.976 [2024-11-05 19:05:57.090752] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:27.976 [2024-11-05 19:05:57.098757] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:27.976 [2024-11-05 19:05:57.106828] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:27.976 [2024-11-05 19:05:57.114751] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:27.976 [2024-11-05 19:05:57.143842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.976 passed 00:16:27.976 Test: admin_create_io_sq_verify_pc ...[2024-11-05 19:05:57.235435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.976 [2024-11-05 19:05:57.251761] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:27.976 [2024-11-05 19:05:57.269583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.237 passed 00:16:28.237 Test: admin_create_io_qp_max_qps ...[2024-11-05 19:05:57.363087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.179 [2024-11-05 19:05:58.457756] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:29.751 [2024-11-05 19:05:58.837203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.751 passed 00:16:29.751 Test: admin_create_io_sq_shared_cq ...[2024-11-05 19:05:58.929373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.751 [2024-11-05 19:05:59.060756] vfio_user.c:2319:handle_del_io_q: *ERROR*: 
/var/run/vfio-user: the associated SQ must be deleted first 00:16:30.012 [2024-11-05 19:05:59.096875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:30.012 passed 00:16:30.012 00:16:30.012 Run Summary: Type Total Ran Passed Failed Inactive 00:16:30.012 suites 1 1 n/a 0 0 00:16:30.012 tests 18 18 18 0 0 00:16:30.012 asserts 360 360 360 0 n/a 00:16:30.012 00:16:30.012 Elapsed time = 1.644 seconds 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 307976 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 307976 ']' 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 307976 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 307976 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 307976' 00:16:30.012 killing process with pid 307976 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 307976 00:16:30.012 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 307976 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:30.274 00:16:30.274 real 0m5.913s 00:16:30.274 user 0m16.621s 00:16:30.274 sys 0m0.506s 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 ************************************ 00:16:30.274 END TEST nvmf_vfio_user_nvme_compliance 00:16:30.274 ************************************ 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 ************************************ 00:16:30.274 START TEST nvmf_vfio_user_fuzz 00:16:30.274 ************************************ 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:30.274 * Looking for test storage... 00:16:30.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.274 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.536 --rc genhtml_branch_coverage=1 00:16:30.536 --rc genhtml_function_coverage=1 00:16:30.536 --rc genhtml_legend=1 00:16:30.536 --rc geninfo_all_blocks=1 00:16:30.536 --rc geninfo_unexecuted_blocks=1 00:16:30.536 00:16:30.536 ' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.536 --rc genhtml_branch_coverage=1 00:16:30.536 --rc genhtml_function_coverage=1 00:16:30.536 --rc genhtml_legend=1 00:16:30.536 --rc geninfo_all_blocks=1 00:16:30.536 --rc geninfo_unexecuted_blocks=1 00:16:30.536 00:16:30.536 ' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.536 --rc genhtml_branch_coverage=1 00:16:30.536 --rc genhtml_function_coverage=1 00:16:30.536 --rc genhtml_legend=1 00:16:30.536 --rc geninfo_all_blocks=1 00:16:30.536 --rc geninfo_unexecuted_blocks=1 00:16:30.536 00:16:30.536 ' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.536 --rc genhtml_branch_coverage=1 00:16:30.536 --rc genhtml_function_coverage=1 00:16:30.536 --rc genhtml_legend=1 00:16:30.536 --rc geninfo_all_blocks=1 00:16:30.536 --rc geninfo_unexecuted_blocks=1 00:16:30.536 00:16:30.536 ' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:30.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:30.536 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=309289 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 309289' 00:16:30.537 Process pid: 309289 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 309289 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 309289 ']' 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
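The target bring-up traced in the surrounding entries can be reproduced by hand with SPDK's scripts/rpc.py client; rpc_cmd is, in effect, a wrapper around it. A minimal sketch, assuming nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock; every subcommand and argument below is taken verbatim from the entries that follow:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the vfio-user transport, then back it with a RAM disk.
$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$RPC bdev_malloc_create 64 512 -b malloc0   # 64 MB bdev, 512 B blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)

# Subsystem: -a allows any host, -s sets the serial number.
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0

# Listener: for VFIOUSER, traddr is the socket directory created above.
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0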
00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:30.537 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:30.797 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:30.797 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:16:30.797 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.740 malloc0 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
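The trid assembled in the entry above is what the fuzzer is pointed at next. An annotated form of the invocation that follows; the flags are copied verbatim from the log, and only readings the log itself supports are asserted (a sketch, not a reference for the nvme_fuzz app):

FUZZ=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

# -m 0x2    : SPDK core mask; pins the fuzzer reactor to core 1
# -t 30     : run time in seconds (consistent with the ~33 s wall time reported below)
# -S 123456 : seed; the summary below prints the random_seed values derived from it
# -F "$TRID": transport ID of the vfio-user subsystem created above
# -N -a     : copied verbatim; their meaning is not asserted here
$FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a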
00:16:31.740 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:03.859 Fuzzing completed. Shutting down the fuzz application 00:17:03.859 00:17:03.859 Dumping successful admin opcodes: 00:17:03.859 8, 9, 10, 24, 00:17:03.859 Dumping successful io opcodes: 00:17:03.859 0, 00:17:03.859 NS: 0x20000081ef00 I/O qp, Total commands completed: 1098048, total successful commands: 4326, random_seed: 4223775296 00:17:03.859 NS: 0x20000081ef00 admin qp, Total commands completed: 138076, total successful commands: 1118, random_seed: 3475653184 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 309289 ']' 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 309289' 00:17:03.859 killing process with pid 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 309289 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:03.859 00:17:03.859 real 0m33.146s 00:17:03.859 user 0m37.501s 00:17:03.859 sys 0m25.342s 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:03.859 
************************************ 00:17:03.859 END TEST nvmf_vfio_user_fuzz 00:17:03.859 ************************************ 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.859 ************************************ 00:17:03.859 START TEST nvmf_auth_target 00:17:03.859 ************************************ 00:17:03.859 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:03.859 * Looking for test storage... 00:17:03.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.860 --rc genhtml_branch_coverage=1 00:17:03.860 --rc genhtml_function_coverage=1 00:17:03.860 --rc genhtml_legend=1 00:17:03.860 --rc geninfo_all_blocks=1 00:17:03.860 --rc geninfo_unexecuted_blocks=1 00:17:03.860 00:17:03.860 ' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.860 --rc genhtml_branch_coverage=1 00:17:03.860 --rc genhtml_function_coverage=1 00:17:03.860 --rc genhtml_legend=1 00:17:03.860 --rc geninfo_all_blocks=1 00:17:03.860 --rc geninfo_unexecuted_blocks=1 00:17:03.860 00:17:03.860 ' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.860 --rc genhtml_branch_coverage=1 00:17:03.860 --rc genhtml_function_coverage=1 00:17:03.860 --rc genhtml_legend=1 00:17:03.860 --rc geninfo_all_blocks=1 00:17:03.860 --rc geninfo_unexecuted_blocks=1 00:17:03.860 00:17:03.860 ' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:03.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.860 --rc genhtml_branch_coverage=1 00:17:03.860 --rc genhtml_function_coverage=1 00:17:03.860 --rc genhtml_legend=1 00:17:03.860 --rc geninfo_all_blocks=1 00:17:03.860 --rc geninfo_unexecuted_blocks=1 00:17:03.860 00:17:03.860 ' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.860 19:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.860 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:03.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:03.861 19:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:17:03.861 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:10.458 
19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:10.458 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == 
unknown ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:10.458 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:10.458 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:10.458 19:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:10.458 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # create_target_ns 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( 
_dev < max + no )) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:10.458 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:10.719 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:17:10.720 10.0.0.1 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:10.720 10.0.0.2 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:10.720 19:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:10.720 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:10.720 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:10.720 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:17:10.720 19:06:40 
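
[Annotation] At this point setup_interfaces has built the single initiator/target pair: cvl_0_1 was moved into the fresh nvmf_ns_spdk namespace, and both sides got addresses from the integer pool 0x0a000001 (167772161), which val_to_ip expands byte-by-byte via printf '%u.%u.%u.%u' into 10.0.0.1 and 10.0.0.2. Each address is also mirrored into /sys/class/net/<dev>/ifalias so later helpers can read it back. A reconstruction of the eval'd commands above, condensed into plain shell:

    # Reconstruction of the setup performed above (device names from the log).
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk     # target side lives in the namespace

    # val_to_ip: 167772161 == 0x0a000001 -> 10.0.0.1; the pool then increments.
    ip addr add 10.0.0.1/24 dev cvl_0_0
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias

    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
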
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:10.720 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:10.982 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:10.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.524 ms 00:17:10.983 00:17:10.983 --- 10.0.0.1 ping statistics --- 00:17:10.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.983 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:10.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:17:10.983 00:17:10.983 --- 10.0.0.2 ping statistics --- 00:17:10.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.983 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:17:10.983 19:06:40 
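
[Annotation] Note that the addresses are never cached in variables during setup: every later lookup resolves a logical name (initiator0, target0) through dev_map, which was populated above with cvl_0_0 and cvl_0_1, and then reads the address back out of /sys/class/net/<dev>/ifalias, optionally inside the namespace. A condensed sketch of that lookup chain (the real get_ip_address/get_net_dev pair uses a nameref onto NVMF_TARGET_NS_CMD; this is simplified):

    # Condensed sketch of the ifalias lookup traced above; dev_map is the
    # associative array filled in by setup_interface_pair.
    get_ip_address() {
        local dev=${dev_map[$1]}
        local ns=${2:+ip netns exec nvmf_ns_spdk}
        [[ -n $dev ]] || return 0      # unset pair (e.g. initiator1) -> empty
        $ns cat "/sys/class/net/$dev/ifalias"
    }
    # get_ip_address initiator0            -> 10.0.0.1
    # get_ip_address target0 in_ns         -> 10.0.0.2 (read inside the netns)
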
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:10.983 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=319959 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 319959 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 319959 ']' 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
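
[Annotation] The setup finishes by exporting the legacy variables the rest of the suite consumes: NVMF_TARGET_INTERFACE=cvl_0_1, NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2 (the SECOND_* pair stays empty because only one interface pair was requested), and NVMF_APP prefixed with the namespace wrapper. That prefix is why the target application in the following lines is launched inside the netns (command taken verbatim from the launch below; -L nvmf_auth enables the auth debug log component):

    ip netns exec nvmf_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvmf_auth
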
00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.984 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=320117 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=b09217da7a24c218d2107c50355a5b85c84764db255f3168 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.FBG 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key b09217da7a24c218d2107c50355a5b85c84764db255f3168 0 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 b09217da7a24c218d2107c50355a5b85c84764db255f3168 0 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # 
key=b09217da7a24c218d2107c50355a5b85c84764db255f3168 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.FBG 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.FBG 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.FBG 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=22d5b85da96f94a376132c3d3553fd583ab57db6b2d7ed2e00119ba2a221edda 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.EFg 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 22d5b85da96f94a376132c3d3553fd583ab57db6b2d7ed2e00119ba2a221edda 3 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 22d5b85da96f94a376132c3d3553fd583ab57db6b2d7ed2e00119ba2a221edda 3 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=22d5b85da96f94a376132c3d3553fd583ab57db6b2d7ed2e00119ba2a221edda 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.EFg 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.EFg 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.EFg 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:17:11.926 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=8dcba73b175457cc5601d8be40758b5f 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.ugs 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 8dcba73b175457cc5601d8be40758b5f 1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 8dcba73b175457cc5601d8be40758b5f 1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=8dcba73b175457cc5601d8be40758b5f 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.ugs 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.ugs 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ugs 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=61c44976c9538ae6d2ed69cc242e9135152fc48c6e03549b 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.Xzm 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 61c44976c9538ae6d2ed69cc242e9135152fc48c6e03549b 2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@521 -- # format_key DHHC-1 61c44976c9538ae6d2ed69cc242e9135152fc48c6e03549b 2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=61c44976c9538ae6d2ed69cc242e9135152fc48c6e03549b 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.Xzm 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.Xzm 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Xzm 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=d0281fdb4a7fbfec87871a4574e2d466426b9ad37c44c4f3 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.KtO 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key d0281fdb4a7fbfec87871a4574e2d466426b9ad37c44c4f3 2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 d0281fdb4a7fbfec87871a4574e2d466426b9ad37c44c4f3 2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=d0281fdb4a7fbfec87871a4574e2d466426b9ad37c44c4f3 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.KtO 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.KtO 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.KtO 
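
[Annotation] Each gen_dhchap_key call reads len/2 random bytes with xxd -p -c0 (so "null 48" yields a 48-character hex string from 24 bytes) and then wraps the result in the DHHC-1 secret representation. The "python -" body itself is hidden from the xtrace, but the secrets that surface later (e.g. DHHC-1:00:YjA5... for key0, whose base64 payload decodes back to the ASCII key plus four trailing bytes) are consistent with base64 over the ASCII key followed by its little-endian CRC-32. A sketch under that assumption, with the 2-digit hash id (00 null, 01 sha256, 02 sha384, 03 sha512) as seen in the digest values above:

    # Sketch of the hidden "python -" step; key/digest values from the log.
    key=b09217da7a24c218d2107c50355a5b85c84764db255f3168 digest=0 python3 - <<'EOF'
    import base64, os, struct, zlib
    key = os.environ["key"].encode()          # the ASCII hex string itself
    crc = struct.pack("<I", zlib.crc32(key))  # 4-byte little-endian CRC-32
    print("DHHC-1:%02d:%s:" % (int(os.environ["digest"]),
                               base64.b64encode(key + crc).decode()))
    EOF

This should reproduce the DHHC-1:00:YjA5... value passed to nvme connect further down.
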
00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=bf96d663f6010fc31f6382cf683d0a2a 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.koY 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key bf96d663f6010fc31f6382cf683d0a2a 1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 bf96d663f6010fc31f6382cf683d0a2a 1 00:17:12.187 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=bf96d663f6010fc31f6382cf683d0a2a 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.koY 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.koY 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.koY 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=b483116a2ee75e818efa7c0ddd1207ae44d0bdf0a8ef3b3c2f2379fc2d4121ae 00:17:12.188 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t 
spdk.key-sha512.XXX 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.8uv 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key b483116a2ee75e818efa7c0ddd1207ae44d0bdf0a8ef3b3c2f2379fc2d4121ae 3 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 b483116a2ee75e818efa7c0ddd1207ae44d0bdf0a8ef3b3c2f2379fc2d4121ae 3 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=b483116a2ee75e818efa7c0ddd1207ae44d0bdf0a8ef3b3c2f2379fc2d4121ae 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.8uv 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.8uv 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8uv 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 319959 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 319959 ']' 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 320117 /var/tmp/host.sock 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 320117 ']' 00:17:12.447 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:17:12.448 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.448 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
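
[Annotation] From here on there are two SPDK processes in play: the target (nvmf_tgt, pid 319959, inside the netns) answering RPCs on the default /var/tmp/spdk.sock, and a second app acting as the NVMe host (spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth, pid 320117). rpc_cmd talks to the former, the hostrpc wrapper to the latter; the only difference is the -s socket argument, as the expanded commands below show:

    # rpc_cmd -> target side (default socket /var/tmp/spdk.sock):
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.FBG
    # hostrpc -> host side (explicit socket):
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FBG
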
00:17:12.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:12.448 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.448 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FBG 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FBG 00:17:12.707 19:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FBG 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.EFg ]] 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EFg 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EFg 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EFg 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ugs 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.967 19:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ugs 00:17:12.967 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ugs 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Xzm ]] 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xzm 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xzm 00:17:13.227 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xzm 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KtO 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KtO 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KtO 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.koY ]] 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.koY 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.koY 00:17:13.487 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.koY 00:17:13.746 19:06:42 
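
[Annotation] The loop traced at target/auth.sh@108-113 registers every generated key file on both sides under stable names: keyN for the host secret and ckeyN for the bidirectional controller secret, skipping ckeyN when no controller key was generated (ckeys[3] is empty). A condensed sketch of the same loop, using the suite's rpc_cmd/hostrpc helpers:

    # Condensed form of the registration loop above (names from the log).
    for i in "${!keys[@]}"; do
        rpc_cmd  keyring_file_add_key "key$i" "${keys[$i]}"
        hostrpc  keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done
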
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8uv 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8uv 00:17:13.746 19:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8uv 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.005 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.264 00:17:14.264 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.264 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.264 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.523 { 00:17:14.523 "cntlid": 1, 00:17:14.523 "qid": 0, 00:17:14.523 "state": "enabled", 00:17:14.523 "thread": "nvmf_tgt_poll_group_000", 00:17:14.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.523 "listen_address": { 00:17:14.523 "trtype": "TCP", 00:17:14.523 "adrfam": "IPv4", 00:17:14.523 "traddr": "10.0.0.2", 00:17:14.523 "trsvcid": "4420" 00:17:14.523 }, 00:17:14.523 "peer_address": { 00:17:14.523 "trtype": "TCP", 00:17:14.523 "adrfam": "IPv4", 00:17:14.523 "traddr": "10.0.0.1", 00:17:14.523 "trsvcid": "49778" 00:17:14.523 }, 00:17:14.523 "auth": { 00:17:14.523 "state": "completed", 00:17:14.523 "digest": "sha256", 00:17:14.523 "dhgroup": "null" 00:17:14.523 } 00:17:14.523 } 00:17:14.523 ]' 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.523 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
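
[Annotation] A successful DH-HMAC-CHAP handshake is asserted by reading the subsystem's queue pairs back from the target and checking the auth block: state "completed", plus the digest and dhgroup configured for this iteration of the loop (sha256/null here). The jq probes traced above reduce to:

    # Assertion pattern used above: pull qpairs from the target, check auth.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]

The controller is then detached and, in the lines that follow, the same keys are exercised through the kernel initiator with nvme connect before moving to the next keyid.
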
00:17:14.782 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:14.782 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.720 19:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.979 00:17:15.979 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.979 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.979 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.238 { 00:17:16.238 "cntlid": 3, 00:17:16.238 "qid": 0, 00:17:16.238 "state": "enabled", 00:17:16.238 "thread": "nvmf_tgt_poll_group_000", 00:17:16.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.238 "listen_address": { 00:17:16.238 "trtype": "TCP", 00:17:16.238 "adrfam": "IPv4", 00:17:16.238 "traddr": "10.0.0.2", 00:17:16.238 "trsvcid": "4420" 00:17:16.238 }, 00:17:16.238 "peer_address": { 00:17:16.238 "trtype": "TCP", 00:17:16.238 "adrfam": "IPv4", 00:17:16.238 "traddr": "10.0.0.1", 00:17:16.238 "trsvcid": "49796" 00:17:16.238 }, 00:17:16.238 "auth": { 00:17:16.238 "state": "completed", 00:17:16.238 "digest": "sha256", 00:17:16.238 "dhgroup": "null" 00:17:16.238 } 00:17:16.238 } 00:17:16.238 ]' 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:16.238 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.497 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:16.497 19:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.437 19:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.437 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.697 00:17:17.697 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.697 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.697 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.956 { 00:17:17.956 "cntlid": 5, 00:17:17.956 "qid": 0, 00:17:17.956 "state": "enabled", 00:17:17.956 "thread": "nvmf_tgt_poll_group_000", 00:17:17.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.956 "listen_address": { 00:17:17.956 "trtype": "TCP", 00:17:17.956 "adrfam": "IPv4", 00:17:17.956 "traddr": "10.0.0.2", 00:17:17.956 "trsvcid": "4420" 00:17:17.956 }, 00:17:17.956 "peer_address": { 00:17:17.956 "trtype": "TCP", 00:17:17.956 "adrfam": "IPv4", 00:17:17.956 "traddr": "10.0.0.1", 00:17:17.956 "trsvcid": "49818" 00:17:17.956 }, 00:17:17.956 "auth": { 00:17:17.956 "state": "completed", 00:17:17.956 "digest": "sha256", 00:17:17.956 "dhgroup": "null" 00:17:17.956 } 00:17:17.956 } 00:17:17.956 ]' 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.956 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.216 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:18.216 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.157 
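Interleaved with the SPDK-host checks, each round also replays the handshake through the kernel initiator: the `nvme connect` invocations above pass the secrets directly on the command line. The strings follow the NVMe DH-HMAC-CHAP transport-secret representation `DHHC-1:<hh>:<base64 key + CRC>:`, where `<hh>` names the hash the secret was generated for — `00` for an unhashed secret, `01`/`02`/`03` for SHA-256/384/512 (my reading of the secret strings in this log and the spec; worth verifying against NVMe TP 8006). A sketch of that leg, with the long secrets abbreviated:

```bash
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Flags copied from the trace; <...> marks base64 material elided here.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```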
19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.157 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.417 00:17:19.417 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.417 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.417 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.677 { 00:17:19.677 "cntlid": 7, 00:17:19.677 "qid": 0, 00:17:19.677 "state": "enabled", 00:17:19.677 "thread": "nvmf_tgt_poll_group_000", 00:17:19.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.677 "listen_address": { 00:17:19.677 "trtype": "TCP", 00:17:19.677 "adrfam": "IPv4", 00:17:19.677 "traddr": "10.0.0.2", 00:17:19.677 "trsvcid": "4420" 00:17:19.677 }, 00:17:19.677 "peer_address": { 00:17:19.677 "trtype": "TCP", 00:17:19.677 "adrfam": "IPv4", 00:17:19.677 "traddr": "10.0.0.1", 00:17:19.677 "trsvcid": "50182" 00:17:19.677 }, 00:17:19.677 "auth": { 00:17:19.677 "state": "completed", 00:17:19.677 "digest": "sha256", 00:17:19.677 "dhgroup": "null" 00:17:19.677 } 00:17:19.677 } 00:17:19.677 ]' 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.677 19:06:48 
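Note that the key3 round just traced calls `nvmf_subsystem_add_host` and `bdev_connect` with `--dhchap-key key3` only: `ckeys[3]` is empty, so the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion at auth.sh line 68 yields an empty array and the controller key — and with it bidirectional authentication — is dropped for that round. The idiom, reconstructed in isolation (`rpc_cmd`, `$subnqn`, `$hostnqn` assumed defined as elsewhere in this log):

```bash
keyid=3
ckeys[3]=""    # no controller key registered for key3 in this run

# ${var:+...} yields the two option words only when ckeys[keyid] is non-empty;
# otherwise the array stays empty and "${ckey[@]}" expands to nothing.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" "${ckey[@]}"
```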
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.677 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.937 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:19.937 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:20.522 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.828 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.828 19:06:50 
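At this point the middle loop has advanced: the same four keys are re-run with `--dhchap-dhgroups ffdhe2048` instead of `null`. The sweep's shape, reconstructed from the auth.sh@118-123 markers (the full contents of `digests[]`/`dhgroups[]` are not visible in this excerpt, so the arrays here are illustrative):

```bash
for digest in "${digests[@]}"; do           # sha256 in this excerpt
    for dhgroup in "${dhgroups[@]}"; do     # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do      # 0 1 2 3
            # Pin the host to exactly one digest/dhgroup so the later
            # nvmf_subsystem_get_qpairs assertions are unambiguous.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```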
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.828 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.148 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.148 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.148 { 00:17:21.148 "cntlid": 9, 00:17:21.148 "qid": 0, 00:17:21.148 "state": "enabled", 00:17:21.148 "thread": "nvmf_tgt_poll_group_000", 00:17:21.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.148 "listen_address": { 00:17:21.148 "trtype": "TCP", 00:17:21.148 "adrfam": "IPv4", 00:17:21.148 "traddr": "10.0.0.2", 00:17:21.148 "trsvcid": "4420" 00:17:21.148 }, 00:17:21.148 "peer_address": { 00:17:21.148 "trtype": "TCP", 00:17:21.148 "adrfam": "IPv4", 00:17:21.148 "traddr": "10.0.0.1", 00:17:21.148 "trsvcid": "50210" 00:17:21.148 }, 00:17:21.148 "auth": { 00:17:21.148 "state": "completed", 00:17:21.148 "digest": "sha256", 00:17:21.148 "dhgroup": "ffdhe2048" 00:17:21.148 } 00:17:21.148 } 00:17:21.148 ]' 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.409 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.670 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:21.670 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.240 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.501 
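Every `hostrpc` marker above expands to the same `rpc.py -s /var/tmp/host.sock` invocation: the test drives two SPDK processes over JSON-RPC — the nvmf target on its default socket, and a second app acting as the host/initiator on `/var/tmp/host.sock`. Judging by the auth.sh@31 expansions, the helper is a one-liner along these lines (not the verbatim source):

```bash
# Route an RPC to the host-side SPDK app rather than the target.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
```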
19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.501 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.762 00:17:22.762 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.762 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.762 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.023 { 00:17:23.023 "cntlid": 11, 00:17:23.023 "qid": 0, 00:17:23.023 "state": "enabled", 00:17:23.023 "thread": "nvmf_tgt_poll_group_000", 00:17:23.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.023 "listen_address": { 00:17:23.023 "trtype": "TCP", 00:17:23.023 "adrfam": "IPv4", 00:17:23.023 "traddr": "10.0.0.2", 00:17:23.023 "trsvcid": "4420" 00:17:23.023 }, 00:17:23.023 "peer_address": { 00:17:23.023 "trtype": "TCP", 00:17:23.023 "adrfam": "IPv4", 00:17:23.023 "traddr": "10.0.0.1", 00:17:23.023 "trsvcid": "50222" 00:17:23.023 }, 00:17:23.023 "auth": { 00:17:23.023 "state": "completed", 00:17:23.023 "digest": "sha256", 00:17:23.023 "dhgroup": "ffdhe2048" 00:17:23.023 } 00:17:23.023 } 00:17:23.023 ]' 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.023 19:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.023 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.283 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:23.283 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.228 19:06:53 
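A note on the log noise: every target-side RPC is bracketed by `autotest_common.sh@561 -- # xtrace_disable` (whose own `set +x` shows up as the `@10` lines) and closed with `autotest_common.sh@589 -- # [[ 0 == 0 ]]`. That is `rpc_cmd` muting xtrace around the call and then asserting its exit status — the `[[ 0 == 0 ]]` lines are that assertion with the return code already substituted in. The real helper in SPDK's autotest_common.sh talks to a persistent rpc.py server process, so this is only a behavioral sketch, not its source:

```bash
rpc_cmd() {
    xtrace_disable                 # traced as autotest_common.sh@561
    "$rootdir/scripts/rpc.py" "$@"
    local rc=$?
    xtrace_restore
    [[ $rc == 0 ]]                 # traced as "[[ 0 == 0 ]]" on success
}
```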
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.228 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.489 00:17:24.489 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.489 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.489 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.750 { 00:17:24.750 "cntlid": 13, 00:17:24.750 "qid": 0, 00:17:24.750 "state": "enabled", 00:17:24.750 "thread": "nvmf_tgt_poll_group_000", 00:17:24.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.750 "listen_address": { 00:17:24.750 "trtype": "TCP", 00:17:24.750 "adrfam": "IPv4", 00:17:24.750 "traddr": "10.0.0.2", 00:17:24.750 "trsvcid": "4420" 00:17:24.750 }, 00:17:24.750 "peer_address": { 00:17:24.750 "trtype": "TCP", 00:17:24.750 "adrfam": "IPv4", 00:17:24.750 "traddr": "10.0.0.1", 00:17:24.750 "trsvcid": "50238" 00:17:24.750 }, 00:17:24.750 "auth": { 00:17:24.750 "state": "completed", 00:17:24.750 "digest": 
"sha256", 00:17:24.750 "dhgroup": "ffdhe2048" 00:17:24.750 } 00:17:24.750 } 00:17:24.750 ]' 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.750 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.011 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:25.011 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:25.582 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.844 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.844 19:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.844 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.105 00:17:26.105 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.105 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.105 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.366 { 00:17:26.366 "cntlid": 15, 00:17:26.366 "qid": 0, 00:17:26.366 "state": "enabled", 00:17:26.366 "thread": "nvmf_tgt_poll_group_000", 00:17:26.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.366 "listen_address": { 00:17:26.366 "trtype": "TCP", 00:17:26.366 "adrfam": "IPv4", 00:17:26.366 "traddr": "10.0.0.2", 00:17:26.366 "trsvcid": "4420" 00:17:26.366 }, 00:17:26.366 "peer_address": { 00:17:26.366 "trtype": "TCP", 00:17:26.366 "adrfam": "IPv4", 00:17:26.366 "traddr": "10.0.0.1", 00:17:26.366 
"trsvcid": "50274" 00:17:26.366 }, 00:17:26.366 "auth": { 00:17:26.366 "state": "completed", 00:17:26.366 "digest": "sha256", 00:17:26.366 "dhgroup": "ffdhe2048" 00:17:26.366 } 00:17:26.366 } 00:17:26.366 ]' 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.366 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.627 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:26.627 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.323 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:27.585 19:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.585 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.846 00:17:27.846 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.846 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.846 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.107 { 00:17:28.107 "cntlid": 17, 00:17:28.107 "qid": 0, 00:17:28.107 "state": "enabled", 00:17:28.107 "thread": "nvmf_tgt_poll_group_000", 00:17:28.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.107 "listen_address": { 00:17:28.107 "trtype": "TCP", 00:17:28.107 "adrfam": "IPv4", 
00:17:28.107 "traddr": "10.0.0.2", 00:17:28.107 "trsvcid": "4420" 00:17:28.107 }, 00:17:28.107 "peer_address": { 00:17:28.107 "trtype": "TCP", 00:17:28.107 "adrfam": "IPv4", 00:17:28.107 "traddr": "10.0.0.1", 00:17:28.107 "trsvcid": "50306" 00:17:28.107 }, 00:17:28.107 "auth": { 00:17:28.107 "state": "completed", 00:17:28.107 "digest": "sha256", 00:17:28.107 "dhgroup": "ffdhe3072" 00:17:28.107 } 00:17:28.107 } 00:17:28.107 ]' 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.107 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.368 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:28.368 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:29.308 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.570 00:17:29.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.830 { 
00:17:29.830 "cntlid": 19, 00:17:29.830 "qid": 0, 00:17:29.830 "state": "enabled", 00:17:29.830 "thread": "nvmf_tgt_poll_group_000", 00:17:29.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.830 "listen_address": { 00:17:29.830 "trtype": "TCP", 00:17:29.830 "adrfam": "IPv4", 00:17:29.830 "traddr": "10.0.0.2", 00:17:29.830 "trsvcid": "4420" 00:17:29.830 }, 00:17:29.830 "peer_address": { 00:17:29.830 "trtype": "TCP", 00:17:29.830 "adrfam": "IPv4", 00:17:29.830 "traddr": "10.0.0.1", 00:17:29.830 "trsvcid": "39860" 00:17:29.830 }, 00:17:29.830 "auth": { 00:17:29.830 "state": "completed", 00:17:29.830 "digest": "sha256", 00:17:29.830 "dhgroup": "ffdhe3072" 00:17:29.830 } 00:17:29.830 } 00:17:29.830 ]' 00:17:29.830 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.830 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.091 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:30.091 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.033 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.293 00:17:31.293 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.293 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.293 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.554 19:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.554 { 00:17:31.554 "cntlid": 21, 00:17:31.554 "qid": 0, 00:17:31.554 "state": "enabled", 00:17:31.554 "thread": "nvmf_tgt_poll_group_000", 00:17:31.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.554 "listen_address": { 00:17:31.554 "trtype": "TCP", 00:17:31.554 "adrfam": "IPv4", 00:17:31.554 "traddr": "10.0.0.2", 00:17:31.554 "trsvcid": "4420" 00:17:31.554 }, 00:17:31.554 "peer_address": { 00:17:31.554 "trtype": "TCP", 00:17:31.554 "adrfam": "IPv4", 00:17:31.554 "traddr": "10.0.0.1", 00:17:31.554 "trsvcid": "39876" 00:17:31.554 }, 00:17:31.554 "auth": { 00:17:31.554 "state": "completed", 00:17:31.554 "digest": "sha256", 00:17:31.554 "dhgroup": "ffdhe3072" 00:17:31.554 } 00:17:31.554 } 00:17:31.554 ]' 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.554 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.814 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:31.814 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.754 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.754 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.754 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.754 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.754 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.014 00:17:33.015 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.015 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.015 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.275 19:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.275 { 00:17:33.275 "cntlid": 23, 00:17:33.275 "qid": 0, 00:17:33.275 "state": "enabled", 00:17:33.275 "thread": "nvmf_tgt_poll_group_000", 00:17:33.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.275 "listen_address": { 00:17:33.275 "trtype": "TCP", 00:17:33.275 "adrfam": "IPv4", 00:17:33.275 "traddr": "10.0.0.2", 00:17:33.275 "trsvcid": "4420" 00:17:33.275 }, 00:17:33.275 "peer_address": { 00:17:33.275 "trtype": "TCP", 00:17:33.275 "adrfam": "IPv4", 00:17:33.275 "traddr": "10.0.0.1", 00:17:33.275 "trsvcid": "39900" 00:17:33.275 }, 00:17:33.275 "auth": { 00:17:33.275 "state": "completed", 00:17:33.275 "digest": "sha256", 00:17:33.275 "dhgroup": "ffdhe3072" 00:17:33.275 } 00:17:33.275 } 00:17:33.275 ]' 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.275 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.535 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:33.535 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.475 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.476 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.476 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.476 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.476 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.476 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.736 00:17:34.736 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.736 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.736 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.996 { 00:17:34.996 "cntlid": 25, 00:17:34.996 "qid": 0, 00:17:34.996 "state": "enabled", 00:17:34.996 "thread": "nvmf_tgt_poll_group_000", 00:17:34.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.996 "listen_address": { 00:17:34.996 "trtype": "TCP", 00:17:34.996 "adrfam": "IPv4", 00:17:34.996 "traddr": "10.0.0.2", 00:17:34.996 "trsvcid": "4420" 00:17:34.996 }, 00:17:34.996 "peer_address": { 00:17:34.996 "trtype": "TCP", 00:17:34.996 "adrfam": "IPv4", 00:17:34.996 "traddr": "10.0.0.1", 00:17:34.996 "trsvcid": "39922" 00:17:34.996 }, 00:17:34.996 "auth": { 00:17:34.996 "state": "completed", 00:17:34.996 "digest": "sha256", 00:17:34.996 "dhgroup": "ffdhe4096" 00:17:34.996 } 00:17:34.996 } 00:17:34.996 ]' 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.996 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.256 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:35.256 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.196 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.457 00:17:36.457 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.457 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.457 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.717 { 00:17:36.717 "cntlid": 27, 00:17:36.717 "qid": 0, 00:17:36.717 "state": "enabled", 00:17:36.717 "thread": "nvmf_tgt_poll_group_000", 00:17:36.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.717 "listen_address": { 00:17:36.717 "trtype": "TCP", 00:17:36.717 "adrfam": "IPv4", 00:17:36.717 "traddr": "10.0.0.2", 00:17:36.717 "trsvcid": "4420" 00:17:36.717 }, 00:17:36.717 "peer_address": { 00:17:36.717 "trtype": "TCP", 00:17:36.717 "adrfam": "IPv4", 00:17:36.717 "traddr": "10.0.0.1", 00:17:36.717 "trsvcid": "39944" 00:17:36.717 }, 00:17:36.717 "auth": { 00:17:36.717 "state": "completed", 00:17:36.717 "digest": "sha256", 00:17:36.717 "dhgroup": "ffdhe4096" 00:17:36.717 } 00:17:36.717 } 00:17:36.717 ]' 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.717 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.718 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.718 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.718 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.718 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.718 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.978 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:36.978 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:37.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.918 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.918 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.178 00:17:38.178 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
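(At this point the run has finished the sha256/ffdhe3072 pass and is repeating the same per-key checks with ffdhe4096.) Each pass of the log above is one verification cycle: restrict the host to a single digest/dhgroup pair with bdev_nvme_set_options, register the host NQN on the subsystem with the key under test via nvmf_subsystem_add_host, attach a controller so the DH-HMAC-CHAP handshake actually runs, confirm through nvmf_subsystem_get_qpairs and jq that the qpair reports auth state "completed" with the expected digest and dhgroup, then detach and deregister before the next key. Below is a minimal sketch of that cycle, with the NQNs, addresses, and RPC paths taken from the log; the key list, the single dhgroup, and the use of one rpc.py for both target (default socket) and host (-s /var/tmp/host.sock) calls are illustrative assumptions, and the nvme-cli connect/disconnect leg and the bidirectional ctrlr-key (ckey) handling are omitted for brevity:

  #!/usr/bin/env bash
  # Sketch of the per-key DH-HMAC-CHAP cycle exercised by target/auth.sh.
  # Assumes keys key0..key3 were loaded into the keyring earlier in the test.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  for key in key0 key1 key2 key3; do
      # Host side: allow exactly one digest/dhgroup combination.
      "$rpc" -s "$hostsock" bdev_nvme_set_options \
          --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
      # Target side: authorize this host with the key under test.
      "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
      # Attaching the controller performs the DH-HMAC-CHAP handshake.
      "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
          --dhchap-key "$key"
      # The qpair should report the negotiated parameters.
      qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
      [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
      [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
      # Tear down so the next key forces a fresh handshake.
      "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
      "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done

Detaching the controller and removing the host between keys is what makes each iteration meaningful: every bdev_nvme_attach_controller opens a new admin queue and re-runs the authentication transaction, so a stale authenticated session can never mask a key that would otherwise fail.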
00:17:38.178 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.178 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.438 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.438 { 00:17:38.438 "cntlid": 29, 00:17:38.438 "qid": 0, 00:17:38.438 "state": "enabled", 00:17:38.438 "thread": "nvmf_tgt_poll_group_000", 00:17:38.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.438 "listen_address": { 00:17:38.438 "trtype": "TCP", 00:17:38.438 "adrfam": "IPv4", 00:17:38.438 "traddr": "10.0.0.2", 00:17:38.438 "trsvcid": "4420" 00:17:38.438 }, 00:17:38.438 "peer_address": { 00:17:38.438 "trtype": "TCP", 00:17:38.438 "adrfam": "IPv4", 00:17:38.438 "traddr": "10.0.0.1", 00:17:38.438 "trsvcid": "39968" 00:17:38.438 }, 00:17:38.438 "auth": { 00:17:38.438 "state": "completed", 00:17:38.438 "digest": "sha256", 00:17:38.438 "dhgroup": "ffdhe4096" 00:17:38.438 } 00:17:38.439 } 00:17:38.439 ]' 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.439 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.699 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:38.699 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: 
--dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:39.641 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.641 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.641 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.641 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.642 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.902 00:17:39.902 19:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.902 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.902 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.162 { 00:17:40.162 "cntlid": 31, 00:17:40.162 "qid": 0, 00:17:40.162 "state": "enabled", 00:17:40.162 "thread": "nvmf_tgt_poll_group_000", 00:17:40.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.162 "listen_address": { 00:17:40.162 "trtype": "TCP", 00:17:40.162 "adrfam": "IPv4", 00:17:40.162 "traddr": "10.0.0.2", 00:17:40.162 "trsvcid": "4420" 00:17:40.162 }, 00:17:40.162 "peer_address": { 00:17:40.162 "trtype": "TCP", 00:17:40.162 "adrfam": "IPv4", 00:17:40.162 "traddr": "10.0.0.1", 00:17:40.162 "trsvcid": "36792" 00:17:40.162 }, 00:17:40.162 "auth": { 00:17:40.162 "state": "completed", 00:17:40.162 "digest": "sha256", 00:17:40.162 "dhgroup": "ffdhe4096" 00:17:40.162 } 00:17:40.162 } 00:17:40.162 ]' 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.162 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.423 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:40.423 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.364 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.365 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.625 00:17:41.886 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.886 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.886 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.886 { 00:17:41.886 "cntlid": 33, 00:17:41.886 "qid": 0, 00:17:41.886 "state": "enabled", 00:17:41.886 "thread": "nvmf_tgt_poll_group_000", 00:17:41.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.886 "listen_address": { 00:17:41.886 "trtype": "TCP", 00:17:41.886 "adrfam": "IPv4", 00:17:41.886 "traddr": "10.0.0.2", 00:17:41.886 "trsvcid": "4420" 00:17:41.886 }, 00:17:41.886 "peer_address": { 00:17:41.886 "trtype": "TCP", 00:17:41.886 "adrfam": "IPv4", 00:17:41.886 "traddr": "10.0.0.1", 00:17:41.886 "trsvcid": "36824" 00:17:41.886 }, 00:17:41.886 "auth": { 00:17:41.886 "state": "completed", 00:17:41.886 "digest": "sha256", 00:17:41.886 "dhgroup": "ffdhe6144" 00:17:41.886 } 00:17:41.886 } 00:17:41.886 ]' 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.886 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.146 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.146 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.146 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.146 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.146 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.405 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret 
DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:42.405 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.977 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.237 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:43.237 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.237 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.237 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.238 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.498 00:17:43.498 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.498 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.498 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.758 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.758 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.758 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.758 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.758 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.758 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.759 { 00:17:43.759 "cntlid": 35, 00:17:43.759 "qid": 0, 00:17:43.759 "state": "enabled", 00:17:43.759 "thread": "nvmf_tgt_poll_group_000", 00:17:43.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.759 "listen_address": { 00:17:43.759 "trtype": "TCP", 00:17:43.759 "adrfam": "IPv4", 00:17:43.759 "traddr": "10.0.0.2", 00:17:43.759 "trsvcid": "4420" 00:17:43.759 }, 00:17:43.759 "peer_address": { 00:17:43.759 "trtype": "TCP", 00:17:43.759 "adrfam": "IPv4", 00:17:43.759 "traddr": "10.0.0.1", 00:17:43.759 "trsvcid": "36836" 00:17:43.759 }, 00:17:43.759 "auth": { 00:17:43.759 "state": "completed", 00:17:43.759 "digest": "sha256", 00:17:43.759 "dhgroup": "ffdhe6144" 00:17:43.759 } 00:17:43.759 } 00:17:43.759 ]' 00:17:43.759 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.759 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.759 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:44.020 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.963 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.535 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.535 { 00:17:45.535 "cntlid": 37, 00:17:45.535 "qid": 0, 00:17:45.535 "state": "enabled", 00:17:45.535 "thread": "nvmf_tgt_poll_group_000", 00:17:45.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.535 "listen_address": { 00:17:45.535 "trtype": "TCP", 00:17:45.535 "adrfam": "IPv4", 00:17:45.535 "traddr": "10.0.0.2", 00:17:45.535 "trsvcid": "4420" 00:17:45.535 }, 00:17:45.535 "peer_address": { 00:17:45.535 "trtype": "TCP", 00:17:45.535 "adrfam": "IPv4", 00:17:45.535 "traddr": "10.0.0.1", 00:17:45.535 "trsvcid": "36866" 00:17:45.535 }, 00:17:45.535 "auth": { 00:17:45.535 "state": "completed", 00:17:45.535 "digest": "sha256", 00:17:45.535 "dhgroup": "ffdhe6144" 00:17:45.535 } 00:17:45.535 } 00:17:45.535 ]' 00:17:45.535 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.796 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.796 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.796 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.796 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.796 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.797 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:45.797 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.057 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:46.057 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.628 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.888 19:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.888 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.148 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.408 { 00:17:47.408 "cntlid": 39, 00:17:47.408 "qid": 0, 00:17:47.408 "state": "enabled", 00:17:47.408 "thread": "nvmf_tgt_poll_group_000", 00:17:47.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.408 "listen_address": { 00:17:47.408 "trtype": "TCP", 00:17:47.408 "adrfam": "IPv4", 00:17:47.408 "traddr": "10.0.0.2", 00:17:47.408 "trsvcid": "4420" 00:17:47.408 }, 00:17:47.408 "peer_address": { 00:17:47.408 "trtype": "TCP", 00:17:47.408 "adrfam": "IPv4", 00:17:47.408 "traddr": "10.0.0.1", 00:17:47.408 "trsvcid": "36902" 00:17:47.408 }, 00:17:47.408 "auth": { 00:17:47.408 "state": "completed", 00:17:47.408 "digest": "sha256", 00:17:47.408 "dhgroup": "ffdhe6144" 00:17:47.408 } 00:17:47.408 } 00:17:47.408 ]' 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.408 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.668 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.668 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.668 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:47.668 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.668 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.929 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:47.929 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.498 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.758 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:48.758 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.758 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.758 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
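For readers following the trace: each connect_authenticate iteration above boils down to three RPCs. The host-side bdev layer is restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with its DH-HMAC-CHAP key (plus a controller key for bidirectional auth), and a controller is attached through the host stack, which forces the handshake. A minimal sketch, assuming the rpc.py path and host socket shown in the trace; the shell variables are illustrative, not the literal target/auth.sh code:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  digest=sha256 dhgroup=ffdhe8192 keyid=0
  # Host side: permit only the digest/dhgroup pair under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Target side: register the host NQN with its DH-HMAC-CHAP key and the
  # controller key (ckey) used for bidirectional authentication.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Host side: attaching a controller triggers the DH-HMAC-CHAP handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"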
00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.759 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.329 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.329 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.329 { 00:17:49.329 "cntlid": 41, 00:17:49.329 "qid": 0, 00:17:49.329 "state": "enabled", 00:17:49.329 "thread": "nvmf_tgt_poll_group_000", 00:17:49.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.329 "listen_address": { 00:17:49.329 "trtype": "TCP", 00:17:49.329 "adrfam": "IPv4", 00:17:49.329 "traddr": "10.0.0.2", 00:17:49.329 "trsvcid": "4420" 00:17:49.329 }, 00:17:49.329 "peer_address": { 00:17:49.329 "trtype": "TCP", 00:17:49.329 "adrfam": "IPv4", 00:17:49.329 "traddr": "10.0.0.1", 00:17:49.329 "trsvcid": "51276" 00:17:49.329 }, 00:17:49.329 "auth": { 00:17:49.329 "state": "completed", 00:17:49.329 "digest": "sha256", 00:17:49.329 "dhgroup": "ffdhe8192" 00:17:49.329 } 00:17:49.329 } 00:17:49.329 ]' 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.589 19:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.589 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.850 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:49.850 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.421 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.682 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.254 00:17:51.254 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.254 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.254 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.515 { 00:17:51.515 "cntlid": 43, 00:17:51.515 "qid": 0, 00:17:51.515 "state": "enabled", 00:17:51.515 "thread": "nvmf_tgt_poll_group_000", 00:17:51.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.515 "listen_address": { 00:17:51.515 "trtype": "TCP", 00:17:51.515 "adrfam": "IPv4", 00:17:51.515 "traddr": "10.0.0.2", 00:17:51.515 "trsvcid": "4420" 00:17:51.515 }, 00:17:51.515 "peer_address": { 00:17:51.515 "trtype": "TCP", 00:17:51.515 "adrfam": "IPv4", 00:17:51.515 "traddr": "10.0.0.1", 00:17:51.515 "trsvcid": "51304" 00:17:51.515 }, 00:17:51.515 "auth": { 00:17:51.515 "state": "completed", 00:17:51.515 "digest": "sha256", 00:17:51.515 "dhgroup": "ffdhe8192" 00:17:51.515 } 00:17:51.515 } 00:17:51.515 ]' 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.515 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.776 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:51.776 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.717 19:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.717 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.288 00:17:53.288 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.288 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.288 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.549 { 00:17:53.549 "cntlid": 45, 00:17:53.549 "qid": 0, 00:17:53.549 "state": "enabled", 00:17:53.549 "thread": "nvmf_tgt_poll_group_000", 00:17:53.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.549 "listen_address": { 00:17:53.549 "trtype": "TCP", 00:17:53.549 "adrfam": "IPv4", 00:17:53.549 "traddr": "10.0.0.2", 00:17:53.549 "trsvcid": "4420" 00:17:53.549 }, 00:17:53.549 "peer_address": { 00:17:53.549 "trtype": "TCP", 00:17:53.549 "adrfam": "IPv4", 00:17:53.549 "traddr": "10.0.0.1", 00:17:53.549 "trsvcid": "51336" 00:17:53.549 }, 00:17:53.549 "auth": { 00:17:53.549 "state": "completed", 00:17:53.549 "digest": "sha256", 00:17:53.549 "dhgroup": "ffdhe8192" 00:17:53.549 } 00:17:53.549 } 00:17:53.549 ]' 00:17:53.549 
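The [[ ... ]] comparisons in the trace then read the negotiated parameters back from the target and match them against the loop variables. A compact equivalent of the @73-@77 steps, under the same assumptions as the sketch above:

  # Confirm the controller attached under the expected name.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Read the qpair back from the target and check the negotiated auth fields.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]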
19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.549 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.810 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:53.810 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.752 19:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.752 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.325 00:17:55.325 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.325 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.325 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.586 { 00:17:55.586 "cntlid": 47, 00:17:55.586 "qid": 0, 00:17:55.586 "state": "enabled", 00:17:55.586 "thread": "nvmf_tgt_poll_group_000", 00:17:55.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.586 "listen_address": { 00:17:55.586 "trtype": "TCP", 00:17:55.586 "adrfam": "IPv4", 00:17:55.586 "traddr": "10.0.0.2", 00:17:55.586 "trsvcid": "4420" 00:17:55.586 }, 00:17:55.586 "peer_address": { 00:17:55.586 "trtype": "TCP", 00:17:55.586 "adrfam": "IPv4", 00:17:55.586 "traddr": "10.0.0.1", 00:17:55.586 "trsvcid": "51378" 00:17:55.586 }, 00:17:55.586 "auth": { 00:17:55.586 "state": "completed", 00:17:55.586 
"digest": "sha256", 00:17:55.586 "dhgroup": "ffdhe8192" 00:17:55.586 } 00:17:55.586 } 00:17:55.586 ]' 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.586 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.847 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:55.847 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:17:56.417 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:56.677 19:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.677 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.938 00:17:56.938 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.938 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.938 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.198 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.198 { 00:17:57.198 "cntlid": 49, 00:17:57.198 "qid": 0, 00:17:57.198 "state": "enabled", 00:17:57.198 "thread": "nvmf_tgt_poll_group_000", 00:17:57.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.198 "listen_address": { 00:17:57.198 "trtype": "TCP", 00:17:57.198 "adrfam": "IPv4", 
00:17:57.198 "traddr": "10.0.0.2", 00:17:57.198 "trsvcid": "4420" 00:17:57.198 }, 00:17:57.198 "peer_address": { 00:17:57.198 "trtype": "TCP", 00:17:57.198 "adrfam": "IPv4", 00:17:57.198 "traddr": "10.0.0.1", 00:17:57.198 "trsvcid": "51410" 00:17:57.198 }, 00:17:57.198 "auth": { 00:17:57.198 "state": "completed", 00:17:57.198 "digest": "sha384", 00:17:57.198 "dhgroup": "null" 00:17:57.198 } 00:17:57.198 } 00:17:57.198 ]' 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.199 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.458 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:57.458 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.399 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.660 00:17:58.660 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.660 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.660 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.920 { 00:17:58.920 "cntlid": 51, 00:17:58.920 "qid": 0, 00:17:58.920 "state": "enabled", 
00:17:58.920 "thread": "nvmf_tgt_poll_group_000", 00:17:58.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.920 "listen_address": { 00:17:58.920 "trtype": "TCP", 00:17:58.920 "adrfam": "IPv4", 00:17:58.920 "traddr": "10.0.0.2", 00:17:58.920 "trsvcid": "4420" 00:17:58.920 }, 00:17:58.920 "peer_address": { 00:17:58.920 "trtype": "TCP", 00:17:58.920 "adrfam": "IPv4", 00:17:58.920 "traddr": "10.0.0.1", 00:17:58.920 "trsvcid": "51430" 00:17:58.920 }, 00:17:58.920 "auth": { 00:17:58.920 "state": "completed", 00:17:58.920 "digest": "sha384", 00:17:58.920 "dhgroup": "null" 00:17:58.920 } 00:17:58.920 } 00:17:58.920 ]' 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.920 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.203 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:59.203 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:59.867 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.128 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.389 00:18:00.389 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.389 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.389 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.650 19:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.650 { 00:18:00.650 "cntlid": 53, 00:18:00.650 "qid": 0, 00:18:00.650 "state": "enabled", 00:18:00.650 "thread": "nvmf_tgt_poll_group_000", 00:18:00.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.650 "listen_address": { 00:18:00.650 "trtype": "TCP", 00:18:00.650 "adrfam": "IPv4", 00:18:00.650 "traddr": "10.0.0.2", 00:18:00.650 "trsvcid": "4420" 00:18:00.650 }, 00:18:00.650 "peer_address": { 00:18:00.650 "trtype": "TCP", 00:18:00.650 "adrfam": "IPv4", 00:18:00.650 "traddr": "10.0.0.1", 00:18:00.650 "trsvcid": "56766" 00:18:00.650 }, 00:18:00.650 "auth": { 00:18:00.650 "state": "completed", 00:18:00.650 "digest": "sha384", 00:18:00.650 "dhgroup": "null" 00:18:00.650 } 00:18:00.650 } 00:18:00.650 ]' 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.650 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.911 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:00.911 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.853 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.853 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.115 00:18:02.115 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.115 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.115 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.376 { 00:18:02.376 "cntlid": 55, 00:18:02.376 "qid": 0, 00:18:02.376 "state": "enabled", 00:18:02.376 "thread": "nvmf_tgt_poll_group_000", 00:18:02.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.376 "listen_address": { 00:18:02.376 "trtype": "TCP", 00:18:02.376 "adrfam": "IPv4", 00:18:02.376 "traddr": "10.0.0.2", 00:18:02.376 "trsvcid": "4420" 00:18:02.376 }, 00:18:02.376 "peer_address": { 00:18:02.376 "trtype": "TCP", 00:18:02.376 "adrfam": "IPv4", 00:18:02.376 "traddr": "10.0.0.1", 00:18:02.376 "trsvcid": "56798" 00:18:02.376 }, 00:18:02.376 "auth": { 00:18:02.376 "state": "completed", 00:18:02.376 "digest": "sha384", 00:18:02.376 "dhgroup": "null" 00:18:02.376 } 00:18:02.376 } 00:18:02.376 ]' 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.376 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.637 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:02.637 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.580 19:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.580 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.841 00:18:03.841 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.841 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.841 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.102 { 00:18:04.102 "cntlid": 57, 00:18:04.102 "qid": 0, 00:18:04.102 "state": "enabled", 00:18:04.102 "thread": "nvmf_tgt_poll_group_000", 00:18:04.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.102 "listen_address": { 00:18:04.102 "trtype": "TCP", 00:18:04.102 "adrfam": "IPv4", 00:18:04.102 "traddr": "10.0.0.2", 00:18:04.102 "trsvcid": "4420" 00:18:04.102 }, 00:18:04.102 "peer_address": { 00:18:04.102 "trtype": "TCP", 00:18:04.102 "adrfam": "IPv4", 00:18:04.102 "traddr": "10.0.0.1", 00:18:04.102 "trsvcid": "56832" 00:18:04.102 }, 00:18:04.102 "auth": { 00:18:04.102 "state": "completed", 00:18:04.102 "digest": "sha384", 00:18:04.102 "dhgroup": "ffdhe2048" 00:18:04.102 } 00:18:04.102 } 00:18:04.102 ]' 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.102 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.363 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:04.363 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:04.934 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.195 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.456 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.456 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.718 { 00:18:05.718 "cntlid": 59, 00:18:05.718 "qid": 0, 00:18:05.718 "state": "enabled", 00:18:05.718 "thread": "nvmf_tgt_poll_group_000", 00:18:05.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.718 "listen_address": { 00:18:05.718 "trtype": "TCP", 00:18:05.718 "adrfam": "IPv4", 00:18:05.718 "traddr": "10.0.0.2", 00:18:05.718 "trsvcid": "4420" 00:18:05.718 }, 00:18:05.718 "peer_address": { 00:18:05.718 "trtype": "TCP", 00:18:05.718 "adrfam": "IPv4", 00:18:05.718 "traddr": "10.0.0.1", 00:18:05.718 "trsvcid": "56866" 00:18:05.718 }, 00:18:05.718 "auth": { 00:18:05.718 "state": "completed", 00:18:05.718 "digest": "sha384", 00:18:05.718 "dhgroup": "ffdhe2048" 00:18:05.718 } 00:18:05.718 } 00:18:05.718 ]' 00:18:05.718 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.718 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.718 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:05.979 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.921 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.183 00:18:07.183 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.183 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.183 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.444 { 00:18:07.444 "cntlid": 61, 00:18:07.444 "qid": 0, 00:18:07.444 "state": "enabled", 00:18:07.444 "thread": "nvmf_tgt_poll_group_000", 00:18:07.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.444 "listen_address": { 00:18:07.444 "trtype": "TCP", 00:18:07.444 "adrfam": "IPv4", 00:18:07.444 "traddr": "10.0.0.2", 00:18:07.444 "trsvcid": "4420" 00:18:07.444 }, 00:18:07.444 "peer_address": { 00:18:07.444 "trtype": "TCP", 00:18:07.444 "adrfam": "IPv4", 00:18:07.444 "traddr": "10.0.0.1", 00:18:07.444 "trsvcid": "56906" 00:18:07.444 }, 00:18:07.444 "auth": { 00:18:07.444 "state": "completed", 00:18:07.444 "digest": "sha384", 00:18:07.444 "dhgroup": "ffdhe2048" 00:18:07.444 } 00:18:07.444 } 00:18:07.444 ]' 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.444 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.705 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.705 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.705 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.705 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:07.705 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.648 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.909 00:18:08.909 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.909 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.909 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.169 { 00:18:09.169 "cntlid": 63, 00:18:09.169 "qid": 0, 00:18:09.169 "state": "enabled", 00:18:09.169 "thread": "nvmf_tgt_poll_group_000", 00:18:09.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.169 "listen_address": { 00:18:09.169 "trtype": "TCP", 00:18:09.169 "adrfam": "IPv4", 00:18:09.169 "traddr": "10.0.0.2", 00:18:09.169 "trsvcid": "4420" 00:18:09.169 }, 00:18:09.169 "peer_address": { 00:18:09.169 "trtype": "TCP", 00:18:09.169 "adrfam": "IPv4", 00:18:09.169 "traddr": "10.0.0.1", 00:18:09.169 "trsvcid": "57898" 00:18:09.169 }, 00:18:09.169 "auth": { 00:18:09.169 "state": "completed", 00:18:09.169 "digest": "sha384", 00:18:09.169 "dhgroup": "ffdhe2048" 00:18:09.169 } 00:18:09.169 } 00:18:09.169 ]' 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.169 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.430 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:09.430 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:10.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.371 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.632 
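Between the ffdhe2048 and ffdhe3072 passes the auth.sh@119/@120 loop echoes fire again, which pins down the overall shape of this phase of the test: every DH group is crossed with every key index, and each pair re-runs the same set-options / add-host / attach / verify / detach cycle. Sketched below, with the array contents stated only as far as this portion of the log reveals them:

  for dhgroup in "${dhgroups[@]}"; do   # null, ffdhe2048, ffdhe3072 so far in this trace
      for keyid in "${!keys[@]}"; do    # 0 1 2 3 (ctrlr keys paired where ckeys[keyid] is set)
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done
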
00:18:10.632 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.632 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.632 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.892 { 00:18:10.892 "cntlid": 65, 00:18:10.892 "qid": 0, 00:18:10.892 "state": "enabled", 00:18:10.892 "thread": "nvmf_tgt_poll_group_000", 00:18:10.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.892 "listen_address": { 00:18:10.892 "trtype": "TCP", 00:18:10.892 "adrfam": "IPv4", 00:18:10.892 "traddr": "10.0.0.2", 00:18:10.892 "trsvcid": "4420" 00:18:10.892 }, 00:18:10.892 "peer_address": { 00:18:10.892 "trtype": "TCP", 00:18:10.892 "adrfam": "IPv4", 00:18:10.892 "traddr": "10.0.0.1", 00:18:10.892 "trsvcid": "57928" 00:18:10.892 }, 00:18:10.892 "auth": { 00:18:10.892 "state": "completed", 00:18:10.892 "digest": "sha384", 00:18:10.892 "dhgroup": "ffdhe3072" 00:18:10.892 } 00:18:10.892 } 00:18:10.892 ]' 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.892 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.152 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.152 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.152 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.152 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:11.152 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.093 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.353 00:18:12.353 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.353 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.353 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.613 { 00:18:12.613 "cntlid": 67, 00:18:12.613 "qid": 0, 00:18:12.613 "state": "enabled", 00:18:12.613 "thread": "nvmf_tgt_poll_group_000", 00:18:12.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.613 "listen_address": { 00:18:12.613 "trtype": "TCP", 00:18:12.613 "adrfam": "IPv4", 00:18:12.613 "traddr": "10.0.0.2", 00:18:12.613 "trsvcid": "4420" 00:18:12.613 }, 00:18:12.613 "peer_address": { 00:18:12.613 "trtype": "TCP", 00:18:12.613 "adrfam": "IPv4", 00:18:12.613 "traddr": "10.0.0.1", 00:18:12.613 "trsvcid": "57950" 00:18:12.613 }, 00:18:12.613 "auth": { 00:18:12.613 "state": "completed", 00:18:12.613 "digest": "sha384", 00:18:12.613 "dhgroup": "ffdhe3072" 00:18:12.613 } 00:18:12.613 } 00:18:12.613 ]' 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.613 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.874 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret 
DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:12.874 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.816 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.816 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.080 00:18:14.080 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.080 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.080 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.341 { 00:18:14.341 "cntlid": 69, 00:18:14.341 "qid": 0, 00:18:14.341 "state": "enabled", 00:18:14.341 "thread": "nvmf_tgt_poll_group_000", 00:18:14.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.341 "listen_address": { 00:18:14.341 "trtype": "TCP", 00:18:14.341 "adrfam": "IPv4", 00:18:14.341 "traddr": "10.0.0.2", 00:18:14.341 "trsvcid": "4420" 00:18:14.341 }, 00:18:14.341 "peer_address": { 00:18:14.341 "trtype": "TCP", 00:18:14.341 "adrfam": "IPv4", 00:18:14.341 "traddr": "10.0.0.1", 00:18:14.341 "trsvcid": "57962" 00:18:14.341 }, 00:18:14.341 "auth": { 00:18:14.341 "state": "completed", 00:18:14.341 "digest": "sha384", 00:18:14.341 "dhgroup": "ffdhe3072" 00:18:14.341 } 00:18:14.341 } 00:18:14.341 ]' 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.341 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:14.602 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:14.602 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
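The key3 pass starting here is the unidirectional case: ckeys[3] is empty, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68 yields an empty array, and neither nvmf_subsystem_add_host nor bdev_nvme_attach_controller receives a --dhchap-ctrlr-key argument. A minimal sketch of that bash idiom, with placeholder values standing in for the real secrets:

  # If ckeys[i] is non-empty, ckey gets two elements ("--dhchap-ctrlr-key"
  # and "ckeyN"); if it is empty or unset, ckey expands to nothing at all.
  ckeys=("c0secret" "c1secret" "c2secret" "")   # placeholders; entry 3 empty, as in this run
  for i in 0 1 2 3; do
      ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "key$i -> --dhchap-key key$i ${ckey[*]}"
  done
  # key0-key2 append "--dhchap-ctrlr-key ckeyN"; key3 appends nothing,
  # which is why the key3 add_host/attach_controller calls carry no ctrlr key.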
00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.543 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.805 00:18:15.805 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.805 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.805 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.067 { 00:18:16.067 "cntlid": 71, 00:18:16.067 "qid": 0, 00:18:16.067 "state": "enabled", 00:18:16.067 "thread": "nvmf_tgt_poll_group_000", 00:18:16.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.067 "listen_address": { 00:18:16.067 "trtype": "TCP", 00:18:16.067 "adrfam": "IPv4", 00:18:16.067 "traddr": "10.0.0.2", 00:18:16.067 "trsvcid": "4420" 00:18:16.067 }, 00:18:16.067 "peer_address": { 00:18:16.067 "trtype": "TCP", 00:18:16.067 "adrfam": "IPv4", 00:18:16.067 "traddr": "10.0.0.1", 00:18:16.067 "trsvcid": "57984" 00:18:16.067 }, 00:18:16.067 "auth": { 00:18:16.067 "state": "completed", 00:18:16.067 "digest": "sha384", 00:18:16.067 "dhgroup": "ffdhe3072" 00:18:16.067 } 00:18:16.067 } 00:18:16.067 ]' 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.067 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.327 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:16.327 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
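Structurally, every stretch like the one above is one turn of the same nested loop: target/auth.sh@119 walks the dhgroups (ffdhe3072 up to this point, ffdhe4096 from here, ffdhe6144 later) and @120 walks key IDs 0-3, re-authenticating for each combination. A condensed sketch of a single turn, under the assumption that the keys/ckeys arrays hold the DHHC-1 secret strings and that target-side RPCs go to the target's default socket while host-side ones use -s /var/tmp/host.sock, as the log shows:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do   # groups exercised in this stretch
    for keyid in 0 1 2 3; do
      # host side: restrict the initiator to one digest/dhgroup combination
      $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # target side: allow this host, with an optional bidirectional ctrlr key
      $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
          --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # host side: attach, which forces the DH-HMAC-CHAP handshake
      $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
          -b nvme0 --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # verify (controller name, qpair auth state), then tear down
      $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
      $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
      $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
      # repeat the handshake with the kernel initiator, then clean up
      nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
          -q "$hostnqn" --hostid "${hostnqn##*uuid:}" -l 0 \
          --dhchap-secret "${keys[$keyid]}" \
          ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
      nvme disconnect -n nqn.2024-03.io.spdk:cnode0
      $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
    done
  done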
00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.269 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.530 00:18:17.530 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.530 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.530 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.791 { 00:18:17.791 "cntlid": 73, 00:18:17.791 "qid": 0, 00:18:17.791 "state": "enabled", 00:18:17.791 "thread": "nvmf_tgt_poll_group_000", 00:18:17.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.791 "listen_address": { 00:18:17.791 "trtype": "TCP", 00:18:17.791 "adrfam": "IPv4", 00:18:17.791 "traddr": "10.0.0.2", 00:18:17.791 "trsvcid": "4420" 00:18:17.791 }, 00:18:17.791 "peer_address": { 00:18:17.791 "trtype": "TCP", 00:18:17.791 "adrfam": "IPv4", 00:18:17.791 "traddr": "10.0.0.1", 00:18:17.791 "trsvcid": "58018" 00:18:17.791 }, 00:18:17.791 "auth": { 00:18:17.791 "state": "completed", 00:18:17.791 "digest": "sha384", 00:18:17.791 "dhgroup": "ffdhe4096" 00:18:17.791 } 00:18:17.791 } 00:18:17.791 ]' 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.791 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.791 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.791 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.791 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.791 
19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.791 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.051 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:18.051 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.994 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.255 00:18:19.255 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.255 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.255 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.517 { 00:18:19.517 "cntlid": 75, 00:18:19.517 "qid": 0, 00:18:19.517 "state": "enabled", 00:18:19.517 "thread": "nvmf_tgt_poll_group_000", 00:18:19.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.517 "listen_address": { 00:18:19.517 "trtype": "TCP", 00:18:19.517 "adrfam": "IPv4", 00:18:19.517 "traddr": "10.0.0.2", 00:18:19.517 "trsvcid": "4420" 00:18:19.517 }, 00:18:19.517 "peer_address": { 00:18:19.517 "trtype": "TCP", 00:18:19.517 "adrfam": "IPv4", 00:18:19.517 "traddr": "10.0.0.1", 00:18:19.517 "trsvcid": "36300" 00:18:19.517 }, 00:18:19.517 "auth": { 00:18:19.517 "state": "completed", 00:18:19.517 "digest": "sha384", 00:18:19.517 "dhgroup": "ffdhe4096" 00:18:19.517 } 00:18:19.517 } 00:18:19.517 ]' 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.517 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.778 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:19.778 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.718 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.980 00:18:20.980 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.980 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.980 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.240 { 00:18:21.240 "cntlid": 77, 00:18:21.240 "qid": 0, 00:18:21.240 "state": "enabled", 00:18:21.240 "thread": "nvmf_tgt_poll_group_000", 00:18:21.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.240 "listen_address": { 00:18:21.240 "trtype": "TCP", 00:18:21.240 "adrfam": "IPv4", 00:18:21.240 "traddr": "10.0.0.2", 00:18:21.240 "trsvcid": "4420" 00:18:21.240 }, 00:18:21.240 "peer_address": { 00:18:21.240 "trtype": "TCP", 00:18:21.240 "adrfam": "IPv4", 00:18:21.240 "traddr": "10.0.0.1", 00:18:21.240 "trsvcid": "36326" 00:18:21.240 }, 00:18:21.240 "auth": { 00:18:21.240 "state": "completed", 00:18:21.240 "digest": "sha384", 00:18:21.240 "dhgroup": "ffdhe4096" 00:18:21.240 } 00:18:21.240 } 00:18:21.240 ]' 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.240 19:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.240 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.500 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.500 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.500 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.500 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:21.500 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.443 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.704 00:18:22.704 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.704 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.704 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.965 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.965 { 00:18:22.965 "cntlid": 79, 00:18:22.965 "qid": 0, 00:18:22.965 "state": "enabled", 00:18:22.965 "thread": "nvmf_tgt_poll_group_000", 00:18:22.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.966 "listen_address": { 00:18:22.966 "trtype": "TCP", 00:18:22.966 "adrfam": "IPv4", 00:18:22.966 "traddr": "10.0.0.2", 00:18:22.966 "trsvcid": "4420" 00:18:22.966 }, 00:18:22.966 "peer_address": { 00:18:22.966 "trtype": "TCP", 00:18:22.966 "adrfam": "IPv4", 00:18:22.966 "traddr": "10.0.0.1", 00:18:22.966 "trsvcid": "36340" 00:18:22.966 }, 00:18:22.966 "auth": { 00:18:22.966 "state": "completed", 00:18:22.966 "digest": "sha384", 00:18:22.966 "dhgroup": "ffdhe4096" 00:18:22.966 } 00:18:22.966 } 00:18:22.966 ]' 00:18:22.966 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.966 19:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.966 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:23.226 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.168 19:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.168 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.740 00:18:24.740 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.740 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.740 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.740 { 00:18:24.740 "cntlid": 81, 00:18:24.740 "qid": 0, 00:18:24.740 "state": "enabled", 00:18:24.740 "thread": "nvmf_tgt_poll_group_000", 00:18:24.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.740 "listen_address": { 00:18:24.740 "trtype": "TCP", 00:18:24.740 "adrfam": "IPv4", 00:18:24.740 "traddr": "10.0.0.2", 00:18:24.740 "trsvcid": "4420" 00:18:24.740 }, 00:18:24.740 "peer_address": { 00:18:24.740 "trtype": "TCP", 00:18:24.740 "adrfam": "IPv4", 00:18:24.740 "traddr": "10.0.0.1", 00:18:24.740 "trsvcid": "36368" 00:18:24.740 }, 00:18:24.740 "auth": { 00:18:24.740 "state": "completed", 00:18:24.740 "digest": 
"sha384", 00:18:24.740 "dhgroup": "ffdhe6144" 00:18:24.740 } 00:18:24.740 } 00:18:24.740 ]' 00:18:24.740 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.001 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.261 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:25.261 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:25.833 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.094 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.354 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.614 { 00:18:26.614 "cntlid": 83, 00:18:26.614 "qid": 0, 00:18:26.614 "state": "enabled", 00:18:26.614 "thread": "nvmf_tgt_poll_group_000", 00:18:26.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.614 "listen_address": { 00:18:26.614 "trtype": "TCP", 00:18:26.614 "adrfam": "IPv4", 00:18:26.614 "traddr": "10.0.0.2", 00:18:26.614 
"trsvcid": "4420" 00:18:26.614 }, 00:18:26.614 "peer_address": { 00:18:26.614 "trtype": "TCP", 00:18:26.614 "adrfam": "IPv4", 00:18:26.614 "traddr": "10.0.0.1", 00:18:26.614 "trsvcid": "36382" 00:18:26.614 }, 00:18:26.614 "auth": { 00:18:26.614 "state": "completed", 00:18:26.614 "digest": "sha384", 00:18:26.614 "dhgroup": "ffdhe6144" 00:18:26.614 } 00:18:26.614 } 00:18:26.614 ]' 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.614 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.875 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.875 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.875 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.875 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.875 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.875 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:26.875 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.814 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.075 
19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.075 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.334 00:18:28.334 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.334 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.334 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.594 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.594 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.594 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.595 { 00:18:28.595 "cntlid": 85, 00:18:28.595 "qid": 0, 00:18:28.595 "state": "enabled", 00:18:28.595 "thread": "nvmf_tgt_poll_group_000", 00:18:28.595 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.595 "listen_address": { 00:18:28.595 "trtype": "TCP", 00:18:28.595 "adrfam": "IPv4", 00:18:28.595 "traddr": "10.0.0.2", 00:18:28.595 "trsvcid": "4420" 00:18:28.595 }, 00:18:28.595 "peer_address": { 00:18:28.595 "trtype": "TCP", 00:18:28.595 "adrfam": "IPv4", 00:18:28.595 "traddr": "10.0.0.1", 00:18:28.595 "trsvcid": "36392" 00:18:28.595 }, 00:18:28.595 "auth": { 00:18:28.595 "state": "completed", 00:18:28.595 "digest": "sha384", 00:18:28.595 "dhgroup": "ffdhe6144" 00:18:28.595 } 00:18:28.595 } 00:18:28.595 ]' 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.595 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.855 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:28.855 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:29.426 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.686 19:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.686 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.687 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.687 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.687 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.257 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.257 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.257 { 00:18:30.257 "cntlid": 87, 
00:18:30.257 "qid": 0, 00:18:30.257 "state": "enabled", 00:18:30.257 "thread": "nvmf_tgt_poll_group_000", 00:18:30.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.257 "listen_address": { 00:18:30.257 "trtype": "TCP", 00:18:30.257 "adrfam": "IPv4", 00:18:30.257 "traddr": "10.0.0.2", 00:18:30.257 "trsvcid": "4420" 00:18:30.257 }, 00:18:30.257 "peer_address": { 00:18:30.257 "trtype": "TCP", 00:18:30.257 "adrfam": "IPv4", 00:18:30.257 "traddr": "10.0.0.1", 00:18:30.257 "trsvcid": "35700" 00:18:30.257 }, 00:18:30.257 "auth": { 00:18:30.258 "state": "completed", 00:18:30.258 "digest": "sha384", 00:18:30.258 "dhgroup": "ffdhe6144" 00:18:30.258 } 00:18:30.258 } 00:18:30.258 ]' 00:18:30.258 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.258 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.258 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:30.517 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.458 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.459 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.029 00:18:32.029 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.029 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.029 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.289 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.289 { 00:18:32.290 "cntlid": 89, 00:18:32.290 "qid": 0, 00:18:32.290 "state": "enabled", 00:18:32.290 "thread": "nvmf_tgt_poll_group_000", 00:18:32.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.290 "listen_address": { 00:18:32.290 "trtype": "TCP", 00:18:32.290 "adrfam": "IPv4", 00:18:32.290 "traddr": "10.0.0.2", 00:18:32.290 "trsvcid": "4420" 00:18:32.290 }, 00:18:32.290 "peer_address": { 00:18:32.290 "trtype": "TCP", 00:18:32.290 "adrfam": "IPv4", 00:18:32.290 "traddr": "10.0.0.1", 00:18:32.290 "trsvcid": "35712" 00:18:32.290 }, 00:18:32.290 "auth": { 00:18:32.290 "state": "completed", 00:18:32.290 "digest": "sha384", 00:18:32.290 "dhgroup": "ffdhe8192" 00:18:32.290 } 00:18:32.290 } 00:18:32.290 ]' 00:18:32.290 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.290 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.290 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.290 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.290 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.550 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.550 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.550 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.550 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:32.550 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.491 19:08:02 
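The --dhchap-secret strings handed to nvme connect above use the DHHC-1:HH:<base64 key material>: representation from NVMe in-band authentication (TP 8006), where HH names the hash used to transform the stored secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); that decoding of HH comes from the spec, not from this log. A hedged host-side equivalent with freshly generated keys rather than the logged ones (the target would need the matching keys configured for the connect to pass):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
# Generate a host secret (SHA-256 transform) and a controller secret.
secret=$(nvme gen-dhchap-key --hmac=1 --nqn "$hostnqn")
ctrl_secret=$(nvme gen-dhchap-key --hmac=2 --nqn "$hostnqn")
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
     --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0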
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.491 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.061 00:18:34.062 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.062 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.062 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.321 { 00:18:34.321 "cntlid": 91, 00:18:34.321 "qid": 0, 00:18:34.321 "state": "enabled", 00:18:34.321 "thread": "nvmf_tgt_poll_group_000", 00:18:34.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.321 "listen_address": { 00:18:34.321 "trtype": "TCP", 00:18:34.321 "adrfam": "IPv4", 00:18:34.321 "traddr": "10.0.0.2", 00:18:34.321 "trsvcid": "4420" 00:18:34.321 }, 00:18:34.321 "peer_address": { 00:18:34.321 "trtype": "TCP", 00:18:34.321 "adrfam": "IPv4", 00:18:34.321 "traddr": "10.0.0.1", 00:18:34.321 "trsvcid": "35748" 00:18:34.321 }, 00:18:34.321 "auth": { 00:18:34.321 "state": "completed", 00:18:34.321 "digest": "sha384", 00:18:34.321 "dhgroup": "ffdhe8192" 00:18:34.321 } 00:18:34.321 } 00:18:34.321 ]' 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.582 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.582 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.582 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.582 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:34.582 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.523 19:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.523 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.093 00:18:36.093 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.093 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.093 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.354 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.354 { 00:18:36.354 "cntlid": 93, 00:18:36.354 "qid": 0, 00:18:36.354 "state": "enabled", 00:18:36.354 "thread": "nvmf_tgt_poll_group_000", 00:18:36.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.354 "listen_address": { 00:18:36.354 "trtype": "TCP", 00:18:36.354 "adrfam": "IPv4", 00:18:36.354 "traddr": "10.0.0.2", 00:18:36.354 "trsvcid": "4420" 00:18:36.354 }, 00:18:36.354 "peer_address": { 00:18:36.354 "trtype": "TCP", 00:18:36.354 "adrfam": "IPv4", 00:18:36.354 "traddr": "10.0.0.1", 00:18:36.354 "trsvcid": "35760" 00:18:36.354 }, 00:18:36.354 "auth": { 00:18:36.354 "state": "completed", 00:18:36.354 "digest": "sha384", 00:18:36.354 "dhgroup": "ffdhe8192" 00:18:36.354 } 00:18:36.354 } 00:18:36.354 ]' 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.354 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.615 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:36.615 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.557 19:08:06 
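ckeys[3] is empty in this script, so the very next round (key3) is the unidirectional case: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the @68 frames yields nothing, nvmf_subsystem_add_host below is called with --dhchap-key key3 only, and the controller is never asked to authenticate back. The bash idiom in isolation (array contents here are illustrative, matching only what this excerpt shows):

# ${var:+word} expands to word only when var is set and non-empty; inside an
# array assignment, an empty ckeys[i] therefore contributes zero elements.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]='')
for i in "${!ckeys[@]}"; do
    args=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i: ${#args[@]} controller-key args"   # key3 prints 0, the rest print 2
done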
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.557 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.127 00:18:38.127 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.127 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.127 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.127 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.388 { 00:18:38.388 "cntlid": 95, 00:18:38.388 "qid": 0, 00:18:38.388 "state": "enabled", 00:18:38.388 "thread": "nvmf_tgt_poll_group_000", 00:18:38.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.388 "listen_address": { 00:18:38.388 "trtype": "TCP", 00:18:38.388 "adrfam": "IPv4", 00:18:38.388 "traddr": "10.0.0.2", 00:18:38.388 "trsvcid": "4420" 00:18:38.388 }, 00:18:38.388 "peer_address": { 00:18:38.388 "trtype": "TCP", 00:18:38.388 "adrfam": "IPv4", 00:18:38.388 "traddr": "10.0.0.1", 00:18:38.388 "trsvcid": "35778" 00:18:38.388 }, 00:18:38.388 "auth": { 00:18:38.388 "state": "completed", 00:18:38.388 "digest": "sha384", 00:18:38.388 "dhgroup": "ffdhe8192" 00:18:38.388 } 00:18:38.388 } 00:18:38.388 ]' 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.388 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.649 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:38.649 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.220 19:08:08 
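That disconnect retires the last sha384 round; in the @118/@119 frames just below, the outer digest loop advances to sha512 and the dhgroup loop restarts at null (plain DH-HMAC-CHAP with no ephemeral Diffie-Hellman exchange). Structurally the whole test is three nested loops over digest, dhgroup, and keyid; a stubbed sketch of that shape, with array contents trimmed to the values visible in this excerpt and the helpers reduced to echoes:

digests=(sha384 sha512)
dhgroups=(null ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)
hostrpc()              { echo "hostrpc $*"; }               # stub; the real one wraps rpc.py -s /var/tmp/host.sock
connect_authenticate() { echo "connect_authenticate $*"; }  # stub; the real one is target/auth.sh@65
for digest in "${digests[@]}"; do                           # target/auth.sh@118
    for dhgroup in "${dhgroups[@]}"; do                     # target/auth.sh@119
        for keyid in "${!keys[@]}"; do                      # target/auth.sh@120
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done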
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.220 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.480 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.789 00:18:39.789 
19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.789 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.790 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.100 { 00:18:40.100 "cntlid": 97, 00:18:40.100 "qid": 0, 00:18:40.100 "state": "enabled", 00:18:40.100 "thread": "nvmf_tgt_poll_group_000", 00:18:40.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.100 "listen_address": { 00:18:40.100 "trtype": "TCP", 00:18:40.100 "adrfam": "IPv4", 00:18:40.100 "traddr": "10.0.0.2", 00:18:40.100 "trsvcid": "4420" 00:18:40.100 }, 00:18:40.100 "peer_address": { 00:18:40.100 "trtype": "TCP", 00:18:40.100 "adrfam": "IPv4", 00:18:40.100 "traddr": "10.0.0.1", 00:18:40.100 "trsvcid": "48322" 00:18:40.100 }, 00:18:40.100 "auth": { 00:18:40.100 "state": "completed", 00:18:40.100 "digest": "sha512", 00:18:40.100 "dhgroup": "null" 00:18:40.100 } 00:18:40.100 } 00:18:40.100 ]' 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.100 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.376 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:40.376 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.948 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.209 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.471 00:18:41.471 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.471 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.471 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.732 { 00:18:41.732 "cntlid": 99, 00:18:41.732 "qid": 0, 00:18:41.732 "state": "enabled", 00:18:41.732 "thread": "nvmf_tgt_poll_group_000", 00:18:41.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.732 "listen_address": { 00:18:41.732 "trtype": "TCP", 00:18:41.732 "adrfam": "IPv4", 00:18:41.732 "traddr": "10.0.0.2", 00:18:41.732 "trsvcid": "4420" 00:18:41.732 }, 00:18:41.732 "peer_address": { 00:18:41.732 "trtype": "TCP", 00:18:41.732 "adrfam": "IPv4", 00:18:41.732 "traddr": "10.0.0.1", 00:18:41.732 "trsvcid": "48360" 00:18:41.732 }, 00:18:41.732 "auth": { 00:18:41.732 "state": "completed", 00:18:41.732 "digest": "sha512", 00:18:41.732 "dhgroup": "null" 00:18:41.732 } 00:18:41.732 } 00:18:41.732 ]' 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.732 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.994 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:41.994 19:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.938 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
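Two RPC endpoints are in play throughout this test: the bare rpc_cmd calls (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, nvmf_subsystem_remove_host) go to the target application, while every hostrpc expansion, like the @31 line just below, pins -s /var/tmp/host.sock, a second SPDK process standing in as the NVMe-oF host. Side by side (the /var/tmp/spdk.sock default for the target side is the usual SPDK convention, assumed here rather than shown in this log):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # target app
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers                              # host app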
00:18:42.938 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.199 00:18:43.199 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.199 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.199 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.460 { 00:18:43.460 "cntlid": 101, 00:18:43.460 "qid": 0, 00:18:43.460 "state": "enabled", 00:18:43.460 "thread": "nvmf_tgt_poll_group_000", 00:18:43.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.460 "listen_address": { 00:18:43.460 "trtype": "TCP", 00:18:43.460 "adrfam": "IPv4", 00:18:43.460 "traddr": "10.0.0.2", 00:18:43.460 "trsvcid": "4420" 00:18:43.460 }, 00:18:43.460 "peer_address": { 00:18:43.460 "trtype": "TCP", 00:18:43.460 "adrfam": "IPv4", 00:18:43.460 "traddr": "10.0.0.1", 00:18:43.460 "trsvcid": "48392" 00:18:43.460 }, 00:18:43.460 "auth": { 00:18:43.460 "state": "completed", 00:18:43.460 "digest": "sha512", 00:18:43.460 "dhgroup": "null" 00:18:43.460 } 00:18:43.460 } 00:18:43.460 ]' 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.460 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.721 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:43.721 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.293 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.555 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.816 00:18:44.816 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.816 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.816 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.076 { 00:18:45.076 "cntlid": 103, 00:18:45.076 "qid": 0, 00:18:45.076 "state": "enabled", 00:18:45.076 "thread": "nvmf_tgt_poll_group_000", 00:18:45.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.076 "listen_address": { 00:18:45.076 "trtype": "TCP", 00:18:45.076 "adrfam": "IPv4", 00:18:45.076 "traddr": "10.0.0.2", 00:18:45.076 "trsvcid": "4420" 00:18:45.076 }, 00:18:45.076 "peer_address": { 00:18:45.076 "trtype": "TCP", 00:18:45.076 "adrfam": "IPv4", 00:18:45.076 "traddr": "10.0.0.1", 00:18:45.076 "trsvcid": "48416" 00:18:45.076 }, 00:18:45.076 "auth": { 00:18:45.076 "state": "completed", 00:18:45.076 "digest": "sha512", 00:18:45.076 "dhgroup": "null" 00:18:45.076 } 00:18:45.076 } 00:18:45.076 ]' 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.076 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.077 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.337 19:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:45.337 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
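
Each attach is then verified from both ends, which is what the jq checks at target/auth.sh@73-77 in the trace are doing: the host RPC confirms the bdev controller came up under the expected name, and the target reports the negotiated auth parameters on the qpair. A condensed form of those checks, with $rpc and $subnqn as in the earlier sketch (the temporary qpairs.json file is a simplification; the script holds the JSON in a shell variable):

[[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# The target-side view of the connection carries the auth outcome per qpair.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" > qpairs.json
[[ "$(jq -r '.[0].auth.digest'  qpairs.json)" == sha512    ]]
[[ "$(jq -r '.[0].auth.dhgroup' qpairs.json)" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   qpairs.json)" == completed ]]
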
00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.285 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.545 00:18:46.545 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.545 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.545 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.805 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.805 { 00:18:46.805 "cntlid": 105, 00:18:46.805 "qid": 0, 00:18:46.805 "state": "enabled", 00:18:46.806 "thread": "nvmf_tgt_poll_group_000", 00:18:46.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.806 "listen_address": { 00:18:46.806 "trtype": "TCP", 00:18:46.806 "adrfam": "IPv4", 00:18:46.806 "traddr": "10.0.0.2", 00:18:46.806 "trsvcid": "4420" 00:18:46.806 }, 00:18:46.806 "peer_address": { 00:18:46.806 "trtype": "TCP", 00:18:46.806 "adrfam": "IPv4", 00:18:46.806 "traddr": "10.0.0.1", 00:18:46.806 "trsvcid": "48438" 00:18:46.806 }, 00:18:46.806 "auth": { 00:18:46.806 "state": "completed", 00:18:46.806 "digest": "sha512", 00:18:46.806 "dhgroup": "ffdhe2048" 00:18:46.806 } 00:18:46.806 } 00:18:46.806 ]' 00:18:46.806 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.806 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.806 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.806 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.806 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.806 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.806 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.806 19:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.066 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:47.066 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:48.008 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.008 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.008 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.008 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.008 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.269 00:18:48.269 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.269 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.269 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.530 { 00:18:48.530 "cntlid": 107, 00:18:48.530 "qid": 0, 00:18:48.530 "state": "enabled", 00:18:48.530 "thread": "nvmf_tgt_poll_group_000", 00:18:48.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.530 "listen_address": { 00:18:48.530 "trtype": "TCP", 00:18:48.530 "adrfam": "IPv4", 00:18:48.530 "traddr": "10.0.0.2", 00:18:48.530 "trsvcid": "4420" 00:18:48.530 }, 00:18:48.530 "peer_address": { 00:18:48.530 "trtype": "TCP", 00:18:48.530 "adrfam": "IPv4", 00:18:48.530 "traddr": "10.0.0.1", 00:18:48.530 "trsvcid": "48464" 00:18:48.530 }, 00:18:48.530 "auth": { 00:18:48.530 "state": "completed", 00:18:48.530 "digest": "sha512", 00:18:48.530 "dhgroup": "ffdhe2048" 00:18:48.530 } 00:18:48.530 } 00:18:48.530 ]' 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.530 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.791 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:48.791 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:49.733 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
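
Besides the SPDK host stack, the same credentials are replayed through the kernel initiator: nvme-cli is handed the raw DHHC-1 secrets (the literal values visible in the trace), the connect must succeed, and the controller is torn down again before the host entry is removed from the subsystem. Schematically, with $subnqn and $hostnqn as before and $key/$ckey standing in for the DHHC-1 strings shown above:

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"    # expect: disconnected 1 controller(s)
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
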
00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.734 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.994 00:18:49.994 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.994 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.994 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.256 { 00:18:50.256 "cntlid": 109, 00:18:50.256 "qid": 0, 00:18:50.256 "state": "enabled", 00:18:50.256 "thread": "nvmf_tgt_poll_group_000", 00:18:50.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.256 "listen_address": { 00:18:50.256 "trtype": "TCP", 00:18:50.256 "adrfam": "IPv4", 00:18:50.256 "traddr": "10.0.0.2", 00:18:50.256 "trsvcid": "4420" 00:18:50.256 }, 00:18:50.256 "peer_address": { 00:18:50.256 "trtype": "TCP", 00:18:50.256 "adrfam": "IPv4", 00:18:50.256 "traddr": "10.0.0.1", 00:18:50.256 "trsvcid": "55972" 00:18:50.256 }, 00:18:50.256 "auth": { 00:18:50.256 "state": "completed", 00:18:50.256 "digest": "sha512", 00:18:50.256 "dhgroup": "ffdhe2048" 00:18:50.256 } 00:18:50.256 } 00:18:50.256 ]' 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.256 19:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.256 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.515 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:50.516 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:51.455 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.456 19:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.456 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.716 00:18:51.716 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.716 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.716 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.978 { 00:18:51.978 "cntlid": 111, 00:18:51.978 "qid": 0, 00:18:51.978 "state": "enabled", 00:18:51.978 "thread": "nvmf_tgt_poll_group_000", 00:18:51.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.978 "listen_address": { 00:18:51.978 "trtype": "TCP", 00:18:51.978 "adrfam": "IPv4", 00:18:51.978 "traddr": "10.0.0.2", 00:18:51.978 "trsvcid": "4420" 00:18:51.978 }, 00:18:51.978 "peer_address": { 00:18:51.978 "trtype": "TCP", 00:18:51.978 "adrfam": "IPv4", 00:18:51.978 "traddr": "10.0.0.1", 00:18:51.978 "trsvcid": "56004" 00:18:51.978 }, 00:18:51.978 "auth": { 00:18:51.978 "state": "completed", 00:18:51.978 "digest": "sha512", 00:18:51.978 "dhgroup": "ffdhe2048" 00:18:51.978 } 00:18:51.978 } 00:18:51.978 ]' 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.978 
19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.978 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.239 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:52.239 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.810 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.070 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.331 00:18:53.331 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.331 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.331 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.591 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.592 { 00:18:53.592 "cntlid": 113, 00:18:53.592 "qid": 0, 00:18:53.592 "state": "enabled", 00:18:53.592 "thread": "nvmf_tgt_poll_group_000", 00:18:53.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.592 "listen_address": { 00:18:53.592 "trtype": "TCP", 00:18:53.592 "adrfam": "IPv4", 00:18:53.592 "traddr": "10.0.0.2", 00:18:53.592 "trsvcid": "4420" 00:18:53.592 }, 00:18:53.592 "peer_address": { 00:18:53.592 "trtype": "TCP", 00:18:53.592 "adrfam": "IPv4", 00:18:53.592 "traddr": "10.0.0.1", 00:18:53.592 "trsvcid": "56032" 00:18:53.592 }, 00:18:53.592 "auth": { 00:18:53.592 "state": "completed", 00:18:53.592 "digest": "sha512", 00:18:53.592 "dhgroup": "ffdhe3072" 00:18:53.592 } 00:18:53.592 } 00:18:53.592 ]' 00:18:53.592 19:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.592 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.853 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:53.853 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.055 00:18:55.055 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.055 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.055 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.315 { 00:18:55.315 "cntlid": 115, 00:18:55.315 "qid": 0, 00:18:55.315 "state": "enabled", 00:18:55.315 "thread": "nvmf_tgt_poll_group_000", 00:18:55.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.315 "listen_address": { 00:18:55.315 "trtype": "TCP", 00:18:55.315 "adrfam": "IPv4", 00:18:55.315 "traddr": "10.0.0.2", 00:18:55.315 "trsvcid": "4420" 00:18:55.315 }, 00:18:55.315 "peer_address": { 00:18:55.315 "trtype": "TCP", 00:18:55.315 "adrfam": "IPv4", 
00:18:55.315 "traddr": "10.0.0.1", 00:18:55.315 "trsvcid": "56072" 00:18:55.315 }, 00:18:55.315 "auth": { 00:18:55.315 "state": "completed", 00:18:55.315 "digest": "sha512", 00:18:55.315 "dhgroup": "ffdhe3072" 00:18:55.315 } 00:18:55.315 } 00:18:55.315 ]' 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.315 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.574 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:55.574 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
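
The whole sequence repeats mechanically: an outer loop over DH groups (null, ffdhe2048 and now ffdhe3072 in this excerpt) and an inner loop over the key indices, matching the for-lines at target/auth.sh@119-120 in the trace. Reconstructed as a sketch, not the verbatim script; connect_authenticate and the keys/dhgroups arrays belong to auth.sh and are only partially visible here:

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Re-pin the host to a single digest/dhgroup combination...
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # ...then run one full attach/verify/detach/nvme-cli round with keyN.
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done

Note the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion repeated in the trace: the controller key is appended only when a ckeyN exists, which is why the key3 rounds authenticate unidirectionally (their nvmf_subsystem_add_host calls carry --dhchap-key key3 with no --dhchap-ctrlr-key).
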
00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.516 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.777 00:18:56.777 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.777 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.777 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.038 { 00:18:57.038 "cntlid": 117, 00:18:57.038 "qid": 0, 00:18:57.038 "state": "enabled", 00:18:57.038 "thread": "nvmf_tgt_poll_group_000", 00:18:57.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.038 "listen_address": { 00:18:57.038 "trtype": "TCP", 
00:18:57.038 "adrfam": "IPv4", 00:18:57.038 "traddr": "10.0.0.2", 00:18:57.038 "trsvcid": "4420" 00:18:57.038 }, 00:18:57.038 "peer_address": { 00:18:57.038 "trtype": "TCP", 00:18:57.038 "adrfam": "IPv4", 00:18:57.038 "traddr": "10.0.0.1", 00:18:57.038 "trsvcid": "56098" 00:18:57.038 }, 00:18:57.038 "auth": { 00:18:57.038 "state": "completed", 00:18:57.038 "digest": "sha512", 00:18:57.038 "dhgroup": "ffdhe3072" 00:18:57.038 } 00:18:57.038 } 00:18:57.038 ]' 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.038 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.299 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:57.299 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.870 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.129 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:58.130 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.130 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.390 00:18:58.390 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.390 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.390 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.650 { 00:18:58.650 "cntlid": 119, 00:18:58.650 "qid": 0, 00:18:58.650 "state": "enabled", 00:18:58.650 "thread": "nvmf_tgt_poll_group_000", 00:18:58.650 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.650 "listen_address": { 00:18:58.650 "trtype": "TCP", 00:18:58.650 "adrfam": "IPv4", 00:18:58.650 "traddr": "10.0.0.2", 00:18:58.650 "trsvcid": "4420" 00:18:58.650 }, 00:18:58.650 "peer_address": { 00:18:58.650 "trtype": "TCP", 00:18:58.650 "adrfam": "IPv4", 00:18:58.650 "traddr": "10.0.0.1", 00:18:58.650 "trsvcid": "56130" 00:18:58.650 }, 00:18:58.650 "auth": { 00:18:58.650 "state": "completed", 00:18:58.650 "digest": "sha512", 00:18:58.650 "dhgroup": "ffdhe3072" 00:18:58.650 } 00:18:58.650 } 00:18:58.650 ]' 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.650 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.910 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:58.910 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.480 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.480 19:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.741 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.001 00:19:00.001 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.001 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.001 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.261 19:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.261 { 00:19:00.261 "cntlid": 121, 00:19:00.261 "qid": 0, 00:19:00.261 "state": "enabled", 00:19:00.261 "thread": "nvmf_tgt_poll_group_000", 00:19:00.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.261 "listen_address": { 00:19:00.261 "trtype": "TCP", 00:19:00.261 "adrfam": "IPv4", 00:19:00.261 "traddr": "10.0.0.2", 00:19:00.261 "trsvcid": "4420" 00:19:00.261 }, 00:19:00.261 "peer_address": { 00:19:00.261 "trtype": "TCP", 00:19:00.261 "adrfam": "IPv4", 00:19:00.261 "traddr": "10.0.0.1", 00:19:00.261 "trsvcid": "46848" 00:19:00.261 }, 00:19:00.261 "auth": { 00:19:00.261 "state": "completed", 00:19:00.261 "digest": "sha512", 00:19:00.261 "dhgroup": "ffdhe4096" 00:19:00.261 } 00:19:00.261 } 00:19:00.261 ]' 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.261 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.526 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:00.526 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:01.096 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
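After every attach the trace runs the same verification pass: bdev_nvme_get_controllers confirms the host actually holds nvme0, nvmf_subsystem_get_qpairs dumps the target-side qpair, and three jq probes assert the negotiated auth fields before the controller is detached and the same keys are re-proved with nvme-cli. A minimal sketch of those assertions (equivalent to the [[ ... == \s\h\a\5\1\2 ]] style checks above), with rpc_cmd standing in for the suite's target-socket rpc.py wrapper:

  # Target side: the qpair listing must show the digest/dhgroup that was pinned.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Host side, via nvme-cli: unlike the RPC path, which references keyring names,
  # nvme connect takes the plaintext DHHC-1 secrets themselves ($key/$ckey are
  # placeholders; the real strings appear verbatim in the trace).
  # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  #     -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
  #     --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  # nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The cntlid stepping by two between rounds (117, 119, 121, ...) is consistent with each round creating two controllers on the subsystem: one for the RPC attach and one for the nvme-cli connect.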
00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.356 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.357 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.618 00:19:01.618 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.618 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.618 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.878 { 00:19:01.878 "cntlid": 123, 00:19:01.878 "qid": 0, 00:19:01.878 "state": "enabled", 00:19:01.878 "thread": "nvmf_tgt_poll_group_000", 00:19:01.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.878 "listen_address": { 00:19:01.878 "trtype": "TCP", 00:19:01.878 "adrfam": "IPv4", 00:19:01.878 "traddr": "10.0.0.2", 00:19:01.878 "trsvcid": "4420" 00:19:01.878 }, 00:19:01.878 "peer_address": { 00:19:01.878 "trtype": "TCP", 00:19:01.878 "adrfam": "IPv4", 00:19:01.878 "traddr": "10.0.0.1", 00:19:01.878 "trsvcid": "46872" 00:19:01.878 }, 00:19:01.878 "auth": { 00:19:01.878 "state": "completed", 00:19:01.878 "digest": "sha512", 00:19:01.878 "dhgroup": "ffdhe4096" 00:19:01.878 } 00:19:01.878 } 00:19:01.878 ]' 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.878 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.138 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.138 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.138 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.138 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:02.138 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:03.078 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.078 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.078 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.078 19:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.078 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.079 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.339 00:19:03.339 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.339 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.339 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.599 19:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.599 { 00:19:03.599 "cntlid": 125, 00:19:03.599 "qid": 0, 00:19:03.599 "state": "enabled", 00:19:03.599 "thread": "nvmf_tgt_poll_group_000", 00:19:03.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.599 "listen_address": { 00:19:03.599 "trtype": "TCP", 00:19:03.599 "adrfam": "IPv4", 00:19:03.599 "traddr": "10.0.0.2", 00:19:03.599 "trsvcid": "4420" 00:19:03.599 }, 00:19:03.599 "peer_address": { 00:19:03.599 "trtype": "TCP", 00:19:03.599 "adrfam": "IPv4", 00:19:03.599 "traddr": "10.0.0.1", 00:19:03.599 "trsvcid": "46886" 00:19:03.599 }, 00:19:03.599 "auth": { 00:19:03.599 "state": "completed", 00:19:03.599 "digest": "sha512", 00:19:03.599 "dhgroup": "ffdhe4096" 00:19:03.599 } 00:19:03.599 } 00:19:03.599 ]' 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.599 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.859 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.859 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.859 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.859 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:03.859 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:04.801 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.801 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.802 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.802 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.062 00:19:05.062 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.062 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.062 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.323 19:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.323 { 00:19:05.323 "cntlid": 127, 00:19:05.323 "qid": 0, 00:19:05.323 "state": "enabled", 00:19:05.323 "thread": "nvmf_tgt_poll_group_000", 00:19:05.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.323 "listen_address": { 00:19:05.323 "trtype": "TCP", 00:19:05.323 "adrfam": "IPv4", 00:19:05.323 "traddr": "10.0.0.2", 00:19:05.323 "trsvcid": "4420" 00:19:05.323 }, 00:19:05.323 "peer_address": { 00:19:05.323 "trtype": "TCP", 00:19:05.323 "adrfam": "IPv4", 00:19:05.323 "traddr": "10.0.0.1", 00:19:05.323 "trsvcid": "46902" 00:19:05.323 }, 00:19:05.323 "auth": { 00:19:05.323 "state": "completed", 00:19:05.323 "digest": "sha512", 00:19:05.323 "dhgroup": "ffdhe4096" 00:19:05.323 } 00:19:05.323 } 00:19:05.323 ]' 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.323 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:05.584 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.526 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.097 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.097 
19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.097 { 00:19:07.097 "cntlid": 129, 00:19:07.097 "qid": 0, 00:19:07.097 "state": "enabled", 00:19:07.097 "thread": "nvmf_tgt_poll_group_000", 00:19:07.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.097 "listen_address": { 00:19:07.097 "trtype": "TCP", 00:19:07.097 "adrfam": "IPv4", 00:19:07.097 "traddr": "10.0.0.2", 00:19:07.097 "trsvcid": "4420" 00:19:07.097 }, 00:19:07.097 "peer_address": { 00:19:07.097 "trtype": "TCP", 00:19:07.097 "adrfam": "IPv4", 00:19:07.097 "traddr": "10.0.0.1", 00:19:07.097 "trsvcid": "46926" 00:19:07.097 }, 00:19:07.097 "auth": { 00:19:07.097 "state": "completed", 00:19:07.097 "digest": "sha512", 00:19:07.097 "dhgroup": "ffdhe6144" 00:19:07.097 } 00:19:07.097 } 00:19:07.097 ]' 00:19:07.097 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.358 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.618 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:07.618 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret 
DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.189 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.449 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.709 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.969 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.970 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.970 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.970 { 00:19:08.970 "cntlid": 131, 00:19:08.970 "qid": 0, 00:19:08.970 "state": "enabled", 00:19:08.970 "thread": "nvmf_tgt_poll_group_000", 00:19:08.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.970 "listen_address": { 00:19:08.970 "trtype": "TCP", 00:19:08.970 "adrfam": "IPv4", 00:19:08.970 "traddr": "10.0.0.2", 00:19:08.970 "trsvcid": "4420" 00:19:08.970 }, 00:19:08.970 "peer_address": { 00:19:08.970 "trtype": "TCP", 00:19:08.970 "adrfam": "IPv4", 00:19:08.970 "traddr": "10.0.0.1", 00:19:08.970 "trsvcid": "46946" 00:19:08.970 }, 00:19:08.970 "auth": { 00:19:08.970 "state": "completed", 00:19:08.970 "digest": "sha512", 00:19:08.970 "dhgroup": "ffdhe6144" 00:19:08.970 } 00:19:08.970 } 00:19:08.970 ]' 00:19:08.970 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.970 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.970 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.230 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.230 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.230 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.230 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.230 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.490 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:09.490 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.062 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.323 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.583 00:19:10.583 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.583 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.583 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.844 { 00:19:10.844 "cntlid": 133, 00:19:10.844 "qid": 0, 00:19:10.844 "state": "enabled", 00:19:10.844 "thread": "nvmf_tgt_poll_group_000", 00:19:10.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:10.844 "listen_address": { 00:19:10.844 "trtype": "TCP", 00:19:10.844 "adrfam": "IPv4", 00:19:10.844 "traddr": "10.0.0.2", 00:19:10.844 "trsvcid": "4420" 00:19:10.844 }, 00:19:10.844 "peer_address": { 00:19:10.844 "trtype": "TCP", 00:19:10.844 "adrfam": "IPv4", 00:19:10.844 "traddr": "10.0.0.1", 00:19:10.844 "trsvcid": "49134" 00:19:10.844 }, 00:19:10.844 "auth": { 00:19:10.844 "state": "completed", 00:19:10.844 "digest": "sha512", 00:19:10.844 "dhgroup": "ffdhe6144" 00:19:10.844 } 00:19:10.844 } 00:19:10.844 ]' 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.844 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret 
DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:11.105 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.046 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.307 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.307 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.307 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:12.307 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.567 00:19:12.567 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.567 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.567 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.828 { 00:19:12.828 "cntlid": 135, 00:19:12.828 "qid": 0, 00:19:12.828 "state": "enabled", 00:19:12.828 "thread": "nvmf_tgt_poll_group_000", 00:19:12.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:12.828 "listen_address": { 00:19:12.828 "trtype": "TCP", 00:19:12.828 "adrfam": "IPv4", 00:19:12.828 "traddr": "10.0.0.2", 00:19:12.828 "trsvcid": "4420" 00:19:12.828 }, 00:19:12.828 "peer_address": { 00:19:12.828 "trtype": "TCP", 00:19:12.828 "adrfam": "IPv4", 00:19:12.828 "traddr": "10.0.0.1", 00:19:12.828 "trsvcid": "49164" 00:19:12.828 }, 00:19:12.828 "auth": { 00:19:12.828 "state": "completed", 00:19:12.828 "digest": "sha512", 00:19:12.828 "dhgroup": "ffdhe6144" 00:19:12.828 } 00:19:12.828 } 00:19:12.828 ]' 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.828 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.828 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.828 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.828 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.828 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.828 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.089 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:13.089 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:14.030 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.030 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.031 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.031 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.603 00:19:14.603 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.603 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.603 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.603 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.864 { 00:19:14.864 "cntlid": 137, 00:19:14.864 "qid": 0, 00:19:14.864 "state": "enabled", 00:19:14.864 "thread": "nvmf_tgt_poll_group_000", 00:19:14.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.864 "listen_address": { 00:19:14.864 "trtype": "TCP", 00:19:14.864 "adrfam": "IPv4", 00:19:14.864 "traddr": "10.0.0.2", 00:19:14.864 "trsvcid": "4420" 00:19:14.864 }, 00:19:14.864 "peer_address": { 00:19:14.864 "trtype": "TCP", 00:19:14.864 "adrfam": "IPv4", 00:19:14.864 "traddr": "10.0.0.1", 00:19:14.864 "trsvcid": "49182" 00:19:14.864 }, 00:19:14.864 "auth": { 00:19:14.864 "state": "completed", 00:19:14.864 "digest": "sha512", 00:19:14.864 "dhgroup": "ffdhe8192" 00:19:14.864 } 00:19:14.864 } 00:19:14.864 ]' 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.864 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.864 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.864 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.864 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.864 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.864 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.125 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:15.125 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:15.697 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.697 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.957 19:08:45 
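A condensed sketch of the add-host/attach pair exercised in each cycle of this trace: the target-side registration that binds a DH-HMAC-CHAP key pair to the host NQN, followed by the host-side controller attach presenting the same keys. Paths, NQNs, and key names mirror this run and are environment-specific.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Target side: allow this host to authenticate against cnode0 with key1, and
# require the controller to prove knowledge of ckey1 (bidirectional auth).
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller over TCP, presenting the same key pair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1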
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.957 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.527 00:19:16.527 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.527 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.527 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.788 { 00:19:16.788 "cntlid": 139, 00:19:16.788 "qid": 0, 00:19:16.788 "state": "enabled", 00:19:16.788 "thread": "nvmf_tgt_poll_group_000", 00:19:16.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.788 "listen_address": { 00:19:16.788 "trtype": "TCP", 00:19:16.788 "adrfam": "IPv4", 00:19:16.788 "traddr": "10.0.0.2", 00:19:16.788 "trsvcid": "4420" 00:19:16.788 }, 00:19:16.788 "peer_address": { 00:19:16.788 "trtype": "TCP", 00:19:16.788 "adrfam": "IPv4", 00:19:16.788 "traddr": "10.0.0.1", 00:19:16.788 "trsvcid": "49218" 00:19:16.788 }, 00:19:16.788 "auth": { 00:19:16.788 "state": "completed", 00:19:16.788 "digest": "sha512", 00:19:16.788 "dhgroup": "ffdhe8192" 00:19:16.788 } 00:19:16.788 } 00:19:16.788 ]' 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.788 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.788 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.788 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.788 19:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.788 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.788 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.049 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:17.049 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: --dhchap-ctrl-secret DHHC-1:02:NjFjNDQ5NzZjOTUzOGFlNmQyZWQ2OWNjMjQyZTkxMzUxNTJmYzQ4YzZlMDM1NDliJ6QLcw==: 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.997 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.997 19:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.997 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.630 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.630 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.630 { 00:19:18.630 "cntlid": 141, 00:19:18.630 "qid": 0, 00:19:18.630 "state": "enabled", 00:19:18.630 "thread": "nvmf_tgt_poll_group_000", 00:19:18.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.630 "listen_address": { 00:19:18.630 "trtype": "TCP", 00:19:18.631 "adrfam": "IPv4", 00:19:18.631 "traddr": "10.0.0.2", 00:19:18.631 "trsvcid": "4420" 00:19:18.631 }, 00:19:18.631 "peer_address": { 00:19:18.631 "trtype": "TCP", 00:19:18.631 "adrfam": "IPv4", 00:19:18.631 "traddr": "10.0.0.1", 00:19:18.631 "trsvcid": "49260" 00:19:18.631 }, 00:19:18.631 "auth": { 00:19:18.631 "state": "completed", 00:19:18.631 "digest": "sha512", 00:19:18.631 "dhgroup": "ffdhe8192" 00:19:18.631 } 00:19:18.631 } 00:19:18.631 ]' 00:19:18.631 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.899 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.899 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.899 19:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:18.899 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:01:YmY5NmQ2NjNmNjAxMGZjMzFmNjM4MmNmNjgzZDBhMmH980Km: 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.842 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.103 19:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.103 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.673 00:19:20.673 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.673 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.673 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.673 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.674 { 00:19:20.674 "cntlid": 143, 00:19:20.674 "qid": 0, 00:19:20.674 "state": "enabled", 00:19:20.674 "thread": "nvmf_tgt_poll_group_000", 00:19:20.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:20.674 "listen_address": { 00:19:20.674 "trtype": "TCP", 00:19:20.674 "adrfam": "IPv4", 00:19:20.674 "traddr": "10.0.0.2", 00:19:20.674 "trsvcid": "4420" 00:19:20.674 }, 00:19:20.674 "peer_address": { 00:19:20.674 "trtype": "TCP", 00:19:20.674 "adrfam": "IPv4", 00:19:20.674 "traddr": "10.0.0.1", 00:19:20.674 "trsvcid": "58544" 00:19:20.674 }, 00:19:20.674 "auth": { 00:19:20.674 "state": "completed", 00:19:20.674 "digest": "sha512", 00:19:20.674 "dhgroup": "ffdhe8192" 00:19:20.674 } 00:19:20.674 } 00:19:20.674 ]' 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.674 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.674 
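A condensed sketch of the verification that follows each successful attach in this trace (target/auth.sh@73-77): confirm the controller name on the host side, then check that the auth parameters recorded on the target's qpair match what was configured. The suite routes the target-side call through its rpc_cmd wrapper inside the test netns; it is shown here as a direct call.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: the attached controller should be reported as nvme0.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth block records what was actually negotiated.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]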
19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:20.934 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:21.873 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:21.873 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.874 19:08:51 
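The @129/@130 lines above join the full digest and dhgroup arrays with IFS=',' before a single host-side set_options call, so the pass that follows can negotiate any supported combination; condensed, with the lists exactly as traced:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192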
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.874 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.133 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.133 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.133 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.133 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.393 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.654 { 00:19:22.654 "cntlid": 145, 00:19:22.654 "qid": 0, 00:19:22.654 "state": "enabled", 00:19:22.654 "thread": "nvmf_tgt_poll_group_000", 00:19:22.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:22.654 "listen_address": { 00:19:22.654 "trtype": "TCP", 00:19:22.654 "adrfam": "IPv4", 00:19:22.654 "traddr": "10.0.0.2", 00:19:22.654 "trsvcid": "4420" 00:19:22.654 }, 00:19:22.654 "peer_address": { 00:19:22.654 
"trtype": "TCP", 00:19:22.654 "adrfam": "IPv4", 00:19:22.654 "traddr": "10.0.0.1", 00:19:22.654 "trsvcid": "58560" 00:19:22.654 }, 00:19:22.654 "auth": { 00:19:22.654 "state": "completed", 00:19:22.654 "digest": "sha512", 00:19:22.654 "dhgroup": "ffdhe8192" 00:19:22.654 } 00:19:22.654 } 00:19:22.654 ]' 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.654 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:22.914 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA5MjE3ZGE3YTI0YzIxOGQyMTA3YzUwMzU1YTViODVjODQ3NjRkYjI1NWYzMTY4IHDRog==: --dhchap-ctrl-secret DHHC-1:03:MjJkNWI4NWRhOTZmOTRhMzc2MTMyYzNkMzU1M2ZkNTgzYWI1N2RiNmIyZDdlZDJlMDAxMTliYTJhMjIxZWRkYdVqnqo=: 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.855 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:23.855 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:24.426 request: 00:19:24.426 { 00:19:24.426 "name": "nvme0", 00:19:24.426 "trtype": "tcp", 00:19:24.426 "traddr": "10.0.0.2", 00:19:24.426 "adrfam": "ipv4", 00:19:24.426 "trsvcid": "4420", 00:19:24.426 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.426 "prchk_reftag": false, 00:19:24.426 "prchk_guard": false, 00:19:24.426 "hdgst": false, 00:19:24.426 "ddgst": false, 00:19:24.426 "dhchap_key": "key2", 00:19:24.426 "allow_unrecognized_csi": false, 00:19:24.426 "method": "bdev_nvme_attach_controller", 00:19:24.426 "req_id": 1 00:19:24.426 } 00:19:24.426 Got JSON-RPC error response 00:19:24.426 response: 00:19:24.426 { 00:19:24.426 "code": -5, 00:19:24.426 "message": "Input/output error" 00:19:24.426 } 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.426 19:08:53 
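The NOT wrapper driving this sequence inverts a command's exit status: the attach with key2 is expected to be rejected because only key1 is registered for this host on the target, and the JSON-RPC -5 Input/output error above is the intended outcome. A minimal sketch of the idiom; the suite's real helper in autotest_common.sh additionally distinguishes statuses above 128 and unexpected signals, as the es checks in the trace show.

# Minimal stand-in for the suite's NOT helper: succeed only if the wrapped
# command fails (the real version also inspects the status for signals, per
# the "(( es > 128 ))" checks traced above).
NOT() { ! "$@"; }

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Expected to fail: the target only knows key1 for this host, so an attach
# authenticating with key2 is rejected with JSON-RPC error -5.
NOT $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key2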
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.426 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:24.687 request: 00:19:24.687 { 00:19:24.687 "name": "nvme0", 00:19:24.687 "trtype": "tcp", 00:19:24.687 "traddr": "10.0.0.2", 00:19:24.687 "adrfam": "ipv4", 00:19:24.687 "trsvcid": "4420", 00:19:24.687 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.687 "prchk_reftag": false, 00:19:24.687 "prchk_guard": false, 00:19:24.687 "hdgst": false, 00:19:24.687 "ddgst": false, 00:19:24.687 "dhchap_key": "key1", 00:19:24.687 "dhchap_ctrlr_key": "ckey2", 00:19:24.687 "allow_unrecognized_csi": false, 00:19:24.687 "method": "bdev_nvme_attach_controller", 00:19:24.687 "req_id": 1 00:19:24.687 } 00:19:24.687 Got JSON-RPC error response 00:19:24.687 response: 00:19:24.687 { 00:19:24.687 "code": -5, 00:19:24.687 "message": "Input/output error" 00:19:24.687 } 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:24.948 19:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.948 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.209 request: 00:19:25.209 { 00:19:25.209 "name": "nvme0", 00:19:25.209 "trtype": "tcp", 00:19:25.209 "traddr": "10.0.0.2", 00:19:25.209 "adrfam": "ipv4", 00:19:25.209 "trsvcid": "4420", 00:19:25.209 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:25.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:25.209 "prchk_reftag": false, 00:19:25.209 "prchk_guard": false, 00:19:25.209 "hdgst": false, 00:19:25.209 "ddgst": false, 00:19:25.209 "dhchap_key": "key1", 00:19:25.209 "dhchap_ctrlr_key": "ckey1", 00:19:25.209 "allow_unrecognized_csi": false, 00:19:25.209 "method": "bdev_nvme_attach_controller", 00:19:25.209 "req_id": 1 00:19:25.209 } 00:19:25.209 Got JSON-RPC error response 00:19:25.209 response: 00:19:25.209 { 00:19:25.209 "code": -5, 00:19:25.209 "message": "Input/output error" 00:19:25.209 } 00:19:25.209 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:25.209 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.209 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 319959 ']' 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 319959' 00:19:25.470 killing process with pid 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 319959 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=347321 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 347321 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:25.470 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 347321 ']' 00:19:25.471 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.471 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.471 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.471 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.471 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 347321 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 347321 ']' 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
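At this point the suite has relaunched nvmf_tgt with --wait-for-rpc -L nvmf_auth (pid 347321) and blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal sketch of such a wait loop, assuming the rpc.py path and socket from this trace; the retry count, interval, and use of rpc_get_methods as the probe are illustrative, not the suite's exact implementation:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds once the target accepts RPCs on the socket
    if "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done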
00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.414 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 null0 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FBG 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.EFg ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EFg 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ugs 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Xzm ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xzm 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:26.676 19:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KtO 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.koY ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.koY 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8uv 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
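The keyring_file_add_key calls above loaded the DH-HMAC-CHAP secrets key0..key3 plus their controller counterparts ckey0..ckey2; the DHHC-1:0N: prefix of each secret encodes how the shared secret was transformed (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Condensed to its two essential target-side RPCs, the sha512/ffdhe8192 case being set up here looks like the sketch below, with the key name, key file, and NQNs taken verbatim from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register the secret file under a keyring name, then authorize the host
# to authenticate with it; both calls appear verbatim in the trace above.
"$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.8uv
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3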
00:19:26.676 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.620 nvme0n1 00:19:27.620 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.620 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.620 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.881 { 00:19:27.881 "cntlid": 1, 00:19:27.881 "qid": 0, 00:19:27.881 "state": "enabled", 00:19:27.881 "thread": "nvmf_tgt_poll_group_000", 00:19:27.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:27.881 "listen_address": { 00:19:27.881 "trtype": "TCP", 00:19:27.881 "adrfam": "IPv4", 00:19:27.881 "traddr": "10.0.0.2", 00:19:27.881 "trsvcid": "4420" 00:19:27.881 }, 00:19:27.881 "peer_address": { 00:19:27.881 "trtype": "TCP", 00:19:27.881 "adrfam": "IPv4", 00:19:27.881 "traddr": "10.0.0.1", 00:19:27.881 "trsvcid": "58602" 00:19:27.881 }, 00:19:27.881 "auth": { 00:19:27.881 "state": "completed", 00:19:27.881 "digest": "sha512", 00:19:27.881 "dhgroup": "ffdhe8192" 00:19:27.881 } 00:19:27.881 } 00:19:27.881 ]' 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.881 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.142 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:28.142 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:29.084 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.085 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.085 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.085 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.346 request: 00:19:29.346 { 00:19:29.346 "name": "nvme0", 00:19:29.346 "trtype": "tcp", 00:19:29.346 "traddr": "10.0.0.2", 00:19:29.346 "adrfam": "ipv4", 00:19:29.346 "trsvcid": "4420", 00:19:29.346 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:29.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.346 "prchk_reftag": false, 00:19:29.346 "prchk_guard": false, 00:19:29.346 "hdgst": false, 00:19:29.346 "ddgst": false, 00:19:29.346 "dhchap_key": "key3", 00:19:29.346 "allow_unrecognized_csi": false, 00:19:29.346 "method": "bdev_nvme_attach_controller", 00:19:29.346 "req_id": 1 00:19:29.346 } 00:19:29.346 Got JSON-RPC error response 00:19:29.346 response: 00:19:29.346 { 00:19:29.346 "code": -5, 00:19:29.346 "message": "Input/output error" 00:19:29.346 } 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.346 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.608 request: 00:19:29.608 { 00:19:29.608 "name": "nvme0", 00:19:29.608 "trtype": "tcp", 00:19:29.608 "traddr": "10.0.0.2", 00:19:29.608 "adrfam": "ipv4", 00:19:29.608 "trsvcid": "4420", 00:19:29.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:29.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.608 "prchk_reftag": false, 00:19:29.608 "prchk_guard": false, 00:19:29.608 "hdgst": false, 00:19:29.608 "ddgst": false, 00:19:29.608 "dhchap_key": "key3", 00:19:29.608 "allow_unrecognized_csi": false, 00:19:29.608 "method": "bdev_nvme_attach_controller", 00:19:29.608 "req_id": 1 00:19:29.608 } 00:19:29.608 Got JSON-RPC error response 00:19:29.608 response: 00:19:29.608 { 00:19:29.608 "code": -5, 00:19:29.608 "message": "Input/output error" 00:19:29.608 } 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.608 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:29.869 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:30.131 request: 00:19:30.131 { 00:19:30.131 "name": "nvme0", 00:19:30.131 "trtype": "tcp", 00:19:30.131 "traddr": "10.0.0.2", 00:19:30.131 "adrfam": "ipv4", 00:19:30.131 "trsvcid": "4420", 00:19:30.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.131 "prchk_reftag": false, 00:19:30.131 "prchk_guard": false, 00:19:30.131 "hdgst": false, 00:19:30.131 "ddgst": false, 00:19:30.131 "dhchap_key": "key0", 00:19:30.131 "dhchap_ctrlr_key": "key1", 00:19:30.131 "allow_unrecognized_csi": false, 00:19:30.131 "method": "bdev_nvme_attach_controller", 00:19:30.131 "req_id": 1 00:19:30.131 } 00:19:30.131 Got JSON-RPC error response 00:19:30.131 response: 00:19:30.131 { 00:19:30.131 "code": -5, 00:19:30.131 "message": "Input/output error" 00:19:30.131 } 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.131 19:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:30.131 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:30.392 nvme0n1 00:19:30.392 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:30.392 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:30.392 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.652 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:31.592 nvme0n1 00:19:31.592 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:31.592 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:31.592 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:31.853 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:31.854 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.854 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.854 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:31.854 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: --dhchap-ctrl-secret DHHC-1:03:YjQ4MzExNmEyZWU3NWU4MThlZmE3YzBkZGQxMjA3YWU0NGQwYmRmMGE4ZWYzYjNjMmYyMzc5ZmMyZDQxMjFhZbmCPzk=: 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.795 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:32.795 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:33.366 request: 00:19:33.366 { 00:19:33.366 "name": "nvme0", 00:19:33.366 "trtype": "tcp", 00:19:33.366 "traddr": "10.0.0.2", 00:19:33.366 "adrfam": "ipv4", 00:19:33.366 "trsvcid": "4420", 00:19:33.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.366 "prchk_reftag": false, 00:19:33.366 "prchk_guard": false, 00:19:33.366 "hdgst": false, 00:19:33.366 "ddgst": false, 00:19:33.366 "dhchap_key": "key1", 00:19:33.366 "allow_unrecognized_csi": false, 00:19:33.366 "method": "bdev_nvme_attach_controller", 00:19:33.366 "req_id": 1 00:19:33.366 } 00:19:33.366 Got JSON-RPC error response 00:19:33.366 response: 00:19:33.366 { 00:19:33.366 "code": -5, 00:19:33.366 "message": "Input/output error" 00:19:33.366 } 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.366 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.307 nvme0n1 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.307 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:34.568 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:34.828 nvme0n1 00:19:34.828 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:34.828 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:34.828 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: '' 2s 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: ]] 00:19:35.088 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGRjYmE3M2IxNzU0NTdjYzU2MDFkOGJlNDA3NThiNWbPZiCO: 00:19:35.089 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:35.089 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:35.089 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: 2s 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: ]] 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDAyODFmZGI0YTdmYmZlYzg3ODcxYTQ1NzRlMmQ0NjY0MjZiOWFkMzdjNDRjNGYzV9FB/w==: 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:37.633 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:39.548 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.121 nvme0n1 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.121 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.693 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:40.693 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.693 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:40.954 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.216 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:41.786 request: 00:19:41.786 { 00:19:41.786 "name": "nvme0", 00:19:41.786 "dhchap_key": "key1", 00:19:41.786 "dhchap_ctrlr_key": "key3", 00:19:41.786 "method": "bdev_nvme_set_keys", 00:19:41.786 "req_id": 1 00:19:41.786 } 00:19:41.786 Got JSON-RPC error response 00:19:41.786 response: 00:19:41.786 { 00:19:41.786 "code": -13, 00:19:41.786 "message": "Permission denied" 00:19:41.786 } 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:41.786 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.046 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:42.046 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.987 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:43.927 nvme0n1 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
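This final phase exercises runtime re-keying: after the target rotates to key2/ckey3, the host deliberately calls bdev_nvme_set_keys with a pair the target no longer allows, and the NOT wrapper expects the denial that follows (JSON-RPC error -13, "Permission denied"). Because the controller was attached with --ctrlr-loss-timeout-sec 1, it then vanishes from bdev_nvme_get_controllers once reconnects keep failing, which the jq length polls verify. A condensed sketch of that check, assuming the hostrpc helper and host socket shown in this trace:

hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
# Re-keying with a pair the target does not accept must be rejected
# (the -13 "Permission denied" response shown below).
if hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0; then
    echo "re-keying with a disallowed pair unexpectedly succeeded" >&2
    exit 1
fi
# With --ctrlr-loss-timeout-sec 1, the controller is torn down shortly
# after re-authentication starts failing.
until (( $(hostrpc bdev_nvme_get_controllers | jq length) == 0 )); do
    sleep 1
done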
00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:43.927 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:44.498 request: 00:19:44.498 { 00:19:44.498 "name": "nvme0", 00:19:44.498 "dhchap_key": "key2", 00:19:44.498 "dhchap_ctrlr_key": "key0", 00:19:44.498 "method": "bdev_nvme_set_keys", 00:19:44.498 "req_id": 1 00:19:44.498 } 00:19:44.498 Got JSON-RPC error response 00:19:44.498 response: 00:19:44.498 { 00:19:44.498 "code": -13, 00:19:44.498 "message": "Permission denied" 00:19:44.498 } 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.498 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:44.759 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:44.759 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:45.698 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:45.698 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:45.698 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 320117 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 320117 ']' 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 320117 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:45.959 19:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 320117 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 320117' 00:19:45.959 killing process with pid 320117 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 320117 00:19:45.959 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 320117 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:46.220 rmmod nvme_tcp 00:19:46.220 rmmod nvme_fabrics 00:19:46.220 rmmod nvme_keyring 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 347321 ']' 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 347321 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 347321 ']' 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 347321 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 347321 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 347321' 00:19:46.220 killing process with pid 347321 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 347321 00:19:46.220 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@976 -- # wait 347321 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:46.480 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@542 -- # iptables-save 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-restore 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FBG /tmp/spdk.key-sha256.ugs /tmp/spdk.key-sha384.KtO /tmp/spdk.key-sha512.8uv /tmp/spdk.key-sha512.EFg /tmp/spdk.key-sha384.Xzm /tmp/spdk.key-sha256.koY '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:48.393 00:19:48.393 real 2m45.023s 00:19:48.393 user 6m8.195s 00:19:48.393 sys 0m24.207s 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.393 ************************************ 00:19:48.393 END TEST nvmf_auth_target 00:19:48.393 ************************************ 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:48.393 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.655 ************************************ 00:19:48.655 START TEST nvmf_bdevio_no_huge 00:19:48.655 ************************************ 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:48.655 * Looking for test storage... 
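[Annotation] The suite starting here reruns the bdevio checks over NVMe-oF/TCP with hugepages disabled; further down this log (see the nvmfappstart trace) the target is launched inside the target network namespace with --no-huge. A condensed sketch of that launch, flags copied from the trace, flag descriptions per the SPDK application options (path relative to the spdk checkout):

    # --no-huge -s 1024: run DPDK without hugepages, using 1024 MB of
    #                    regular memory instead
    # -m 0x78:           reactor core mask 0b1111000, i.e. cores 3-6
    # -i 0 -e 0xFFFF:    shm id 0, all tracepoint groups enabled
    ip netns exec nvmf_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78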
00:19:48.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:48.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.655 --rc genhtml_branch_coverage=1 00:19:48.655 --rc genhtml_function_coverage=1 00:19:48.655 --rc genhtml_legend=1 00:19:48.655 --rc geninfo_all_blocks=1 00:19:48.655 --rc geninfo_unexecuted_blocks=1 00:19:48.655 00:19:48.655 ' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:48.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.655 --rc genhtml_branch_coverage=1 00:19:48.655 --rc genhtml_function_coverage=1 00:19:48.655 --rc genhtml_legend=1 00:19:48.655 --rc geninfo_all_blocks=1 00:19:48.655 --rc geninfo_unexecuted_blocks=1 00:19:48.655 00:19:48.655 ' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:48.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.655 --rc genhtml_branch_coverage=1 00:19:48.655 --rc genhtml_function_coverage=1 00:19:48.655 --rc genhtml_legend=1 00:19:48.655 --rc geninfo_all_blocks=1 00:19:48.655 --rc geninfo_unexecuted_blocks=1 00:19:48.655 00:19:48.655 ' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:48.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.655 --rc genhtml_branch_coverage=1 00:19:48.655 --rc genhtml_function_coverage=1 00:19:48.655 --rc genhtml_legend=1 00:19:48.655 --rc geninfo_all_blocks=1 00:19:48.655 --rc geninfo_unexecuted_blocks=1 00:19:48.655 00:19:48.655 ' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.655 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:48.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:19:48.656 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # net_devs=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:19:56.798 19:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:56.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:56.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:56.798 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:56.798 Found net devices 
under 0000:4b:00.1: cvl_0_1 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # create_target_ns 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:56.798 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:19:56.799 19:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:56.799 19:09:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:56.799 10.0.0.1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
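[Annotation] Both set_ip calls above draw from the same integer pool (ip_pool starts at 0x0a000001 = 167772161, incremented per interface); val_to_ip just unpacks one octet per byte into dotted-quad form, which is what the printf '%u.%u.%u.%u' trace shows. A self-contained equivalent of that conversion:

    # 167772161 -> 10.0.0.1; the next pool value, 167772162, -> 10.0.0.2
    val=167772161
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))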
00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:56.799 10.0.0.2 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:56.799 19:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 
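[Annotation] The two pings that follow are the round-trip check on the wiring built above: one E810 port (cvl_0_0) stays in the root namespace as the initiator, the other (cvl_0_1) was moved into nvmf_ns_spdk as the target, and TCP port 4420 was opened for the later NVMe/TCP traffic with a rule tagged for cleanup. The whole setup, condensed from this run's trace (interface names as captured, root assumed):

    ip netns add nvmf_ns_spdk
    ip link set cvl_0_1 netns nvmf_ns_spdk           # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_0              # initiator side
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF               # tag enables later cleanup
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
    ping -c 1 10.0.0.2                               # initiator -> target ns

The matching teardown seen at the end of the previous suite (the iptr helper) removes only the tagged rules by round-tripping the ruleset: iptables-save | grep -v SPDK_NVMF | iptables-restore.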
00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:56.799 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:56.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.575 ms 00:19:56.799 00:19:56.799 --- 10.0.0.1 ping statistics --- 00:19:56.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.799 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:56.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:56.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:56.800 00:19:56.800 --- 10.0.0.2 ping statistics --- 00:19:56.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.800 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:56.800 19:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:56.800 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=356135 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 356135 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 356135 ']' 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:56.801 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.801 [2024-11-05 19:09:25.506539] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:19:56.801 [2024-11-05 19:09:25.506610] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:56.801 [2024-11-05 19:09:25.613503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.801 [2024-11-05 19:09:25.673895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.801 [2024-11-05 19:09:25.673943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.801 [2024-11-05 19:09:25.673951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.801 [2024-11-05 19:09:25.673958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.801 [2024-11-05 19:09:25.673965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.801 [2024-11-05 19:09:25.675517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.801 [2024-11-05 19:09:25.675805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:56.801 [2024-11-05 19:09:25.675998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:56.801 [2024-11-05 19:09:25.676096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.062 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.062 [2024-11-05 19:09:26.381751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.323 
19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.323 Malloc0 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.323 [2024-11-05 19:09:26.435568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:19:57.323 { 00:19:57.323 "params": { 00:19:57.323 "name": "Nvme$subsystem", 00:19:57.323 "trtype": "$TEST_TRANSPORT", 00:19:57.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.323 "adrfam": "ipv4", 00:19:57.323 "trsvcid": "$NVMF_PORT", 00:19:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.323 "hdgst": ${hdgst:-false}, 00:19:57.323 "ddgst": ${ddgst:-false} 00:19:57.323 }, 00:19:57.323 "method": "bdev_nvme_attach_controller" 00:19:57.323 } 
00:19:57.323 EOF 00:19:57.323 )") 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:19:57.323 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:19:57.323 "params": { 00:19:57.323 "name": "Nvme1", 00:19:57.323 "trtype": "tcp", 00:19:57.323 "traddr": "10.0.0.2", 00:19:57.323 "adrfam": "ipv4", 00:19:57.323 "trsvcid": "4420", 00:19:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.323 "hdgst": false, 00:19:57.323 "ddgst": false 00:19:57.323 }, 00:19:57.323 "method": "bdev_nvme_attach_controller" 00:19:57.323 }' 00:19:57.323 [2024-11-05 19:09:26.493640] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:19:57.323 [2024-11-05 19:09:26.493725] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid356403 ] 00:19:57.323 [2024-11-05 19:09:26.575898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.323 [2024-11-05 19:09:26.631395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.323 [2024-11-05 19:09:26.631512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.323 [2024-11-05 19:09:26.631515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.584 I/O targets: 00:19:57.584 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:57.584 00:19:57.584 00:19:57.584 CUnit - A unit testing framework for C - Version 2.1-3 00:19:57.584 http://cunit.sourceforge.net/ 00:19:57.584 00:19:57.584 00:19:57.584 Suite: bdevio tests on: Nvme1n1 00:19:57.584 Test: blockdev write read block ...passed 00:19:57.584 Test: blockdev write zeroes read block ...passed 00:19:57.584 Test: blockdev write zeroes read no split ...passed 00:19:57.844 Test: blockdev write zeroes read split ...passed 00:19:57.845 Test: blockdev write zeroes read split partial ...passed 00:19:57.845 Test: blockdev reset ...[2024-11-05 19:09:26.933556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:57.845 [2024-11-05 19:09:26.933625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126800 (9): Bad file descriptor 00:19:57.845 [2024-11-05 19:09:27.084997] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:57.845 passed 00:19:57.845 Test: blockdev write read 8 blocks ...passed 00:19:57.845 Test: blockdev write read size > 128k ...passed 00:19:57.845 Test: blockdev write read invalid size ...passed 00:19:58.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:58.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:58.105 Test: blockdev write read max offset ...passed 00:19:58.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:58.105 Test: blockdev writev readv 8 blocks ...passed 00:19:58.105 Test: blockdev writev readv 30 x 1block ...passed 00:19:58.105 Test: blockdev writev readv block ...passed 00:19:58.105 Test: blockdev writev readv size > 128k ...passed 00:19:58.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:58.105 Test: blockdev comparev and writev ...[2024-11-05 19:09:27.352105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.105 [2024-11-05 19:09:27.352131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.105 [2024-11-05 19:09:27.352142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.105 [2024-11-05 19:09:27.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:58.105 [2024-11-05 19:09:27.352576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.105 [2024-11-05 19:09:27.352585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:58.105 [2024-11-05 19:09:27.352594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.105 [2024-11-05 19:09:27.352600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:58.105 [2024-11-05 19:09:27.353037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.105 [2024-11-05 19:09:27.353046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:58.105 [2024-11-05 19:09:27.353056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.106 [2024-11-05 19:09:27.353065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:58.106 [2024-11-05 19:09:27.353548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.106 [2024-11-05 19:09:27.353555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:58.106 [2024-11-05 19:09:27.353565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.106 [2024-11-05 19:09:27.353570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:58.106 passed 00:19:58.367 Test: blockdev nvme passthru rw ...passed 00:19:58.367 Test: blockdev nvme passthru vendor specific ...[2024-11-05 19:09:27.437670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.367 [2024-11-05 19:09:27.437682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:58.367 [2024-11-05 19:09:27.437993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.367 [2024-11-05 19:09:27.438000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:58.367 [2024-11-05 19:09:27.438329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.367 [2024-11-05 19:09:27.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:58.367 [2024-11-05 19:09:27.438669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.367 [2024-11-05 19:09:27.438677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:58.367 passed 00:19:58.367 Test: blockdev nvme admin passthru ...passed 00:19:58.367 Test: blockdev copy ...passed 00:19:58.367 00:19:58.367 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.367 suites 1 1 n/a 0 0 00:19:58.367 tests 23 23 23 0 0 00:19:58.367 asserts 152 152 152 0 n/a 00:19:58.367 00:19:58.367 Elapsed time = 1.415 seconds 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:58.629 rmmod nvme_tcp 00:19:58.629 rmmod nvme_fabrics 00:19:58.629 rmmod nvme_keyring 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@106 -- # set -e 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 356135 ']' 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 356135 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 356135 ']' 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 356135 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 356135 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:58.629 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:58.630 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 356135' 00:19:58.630 killing process with pid 356135 00:19:58.630 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 356135 00:19:58.630 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 356135 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@264 -- # local dev 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:59.201 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # return 0 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@284 -- # iptr 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-save 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-restore 00:20:01.115 00:20:01.115 real 0m12.626s 00:20:01.115 user 0m14.479s 00:20:01.115 sys 0m6.681s 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:01.115 ************************************ 00:20:01.115 END TEST nvmf_bdevio_no_huge 00:20:01.115 ************************************ 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:01.115 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.376 ************************************ 00:20:01.376 START TEST nvmf_tls 00:20:01.376 ************************************ 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:01.376 * Looking for test storage... 
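The nvmf_bdevio_no_huge run that ends above reduces to a short RPC sequence against a target launched with --no-huge -s 1024 inside the nvmf_ns_spdk namespace. A minimal sketch of that sequence, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket (the harness issues the same calls through its rpc_cmd wrapper):

    # Target launch, as assembled at nvmf/common.sh@327 above: no hugepages, 1024 MiB cap, cores 0x78.
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

    # Transport, backing bdev, subsystem, namespace, listener (bdevio.sh@18-22 above).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects as an initiator through the generated JSON (bdev_nvme_attach_controller to 10.0.0.2:4420, hdgst/ddgst off) and drives the 23 CUnit cases in the run report; the COMPARE FAILURE and ABORTED - FAILED FUSED completions in the comparev-and-writev test are logged at NOTICE level while the test still passes.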
00:20:01.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:01.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.376 --rc genhtml_branch_coverage=1 00:20:01.376 --rc genhtml_function_coverage=1 00:20:01.376 --rc genhtml_legend=1 00:20:01.376 --rc geninfo_all_blocks=1 00:20:01.376 --rc geninfo_unexecuted_blocks=1 00:20:01.376 00:20:01.376 ' 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:01.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.376 --rc genhtml_branch_coverage=1 00:20:01.376 --rc genhtml_function_coverage=1 00:20:01.376 --rc genhtml_legend=1 00:20:01.376 --rc geninfo_all_blocks=1 00:20:01.376 --rc geninfo_unexecuted_blocks=1 00:20:01.376 00:20:01.376 ' 00:20:01.376 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:01.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.376 --rc genhtml_branch_coverage=1 00:20:01.377 --rc genhtml_function_coverage=1 00:20:01.377 --rc genhtml_legend=1 00:20:01.377 --rc geninfo_all_blocks=1 00:20:01.377 --rc geninfo_unexecuted_blocks=1 00:20:01.377 00:20:01.377 ' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:01.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.377 --rc genhtml_branch_coverage=1 00:20:01.377 --rc genhtml_function_coverage=1 00:20:01.377 --rc genhtml_legend=1 00:20:01.377 --rc geninfo_all_blocks=1 00:20:01.377 --rc geninfo_unexecuted_blocks=1 00:20:01.377 00:20:01.377 ' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
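The lt 1.15 2 call traced above is a plain field-wise version comparison: cmp_versions splits both strings on '.', '-' and ':', pads the shorter with zeros, and lets the first differing numeric field decide. A standalone sketch of the same idea, assuming purely numeric fields (the function name and shape here are illustrative, not the exact scripts/common.sh code):

    # version_lt A B: succeed (return 0) when version A sorts strictly before B.
    version_lt() {
        local -a a b
        local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))   # compare up to the longer length
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0        # first smaller field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1        # first larger field loses
        done
        return 1                                             # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # prints, matching the branch the harness takes above

The harness uses the result of this check against the detected lcov version to pick the rc-option spelling, which is what the --rc lcov_branch_coverage=1 flags exported above are about.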
00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:01.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.377 
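One line in this stretch is worth a call-out: nvmf/common.sh@31 runs '[' '' -eq 1 ']' and test prints "line 31: [: : integer expression expected". That is standard test/[ behavior: -eq is an integer comparison, so an empty operand is a usage error rather than simply false, and the harness tolerates it because only the command's non-zero status matters there. A minimal reproduction plus the usual guard (the flag variable name here is illustrative):

    flag=""                                  # e.g. an unset/empty toggle sourced from the environment
    [ "$flag" -eq 1 ] && echo "flag set"     # stderr: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] || echo "flag empty, treated as 0"   # guarded numeric test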
19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:20:01.377 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.521 19:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:09.521 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:09.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:09.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- 
# [[ tcp == tcp ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:09.522 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:09.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # create_target_ns 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.522 
19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 
167772161 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:09.522 10.0.0.1 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:09.522 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:09.523 10.0.0.2 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:09.523 
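Condensed, the interface bring-up traced above comes down to the commands below. Everything is copied from the eval'd lines, except val_to_ip, whose body is not echoed by xtrace and is inferred here from its input/output pairs (167772161 = 0x0A000001 -> 10.0.0.1), and the iptables comment tag, which is elided:

val_to_ip() {
    # inferred sketch: split a 32-bit value into a dotted quad
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_0                 # initiator side, val_to_ip 167772161
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port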
19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:09.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.540 ms 00:20:09.523 00:20:09.523 --- 10.0.0.1 ping statistics --- 00:20:09.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.523 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:09.523 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:09.523 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:09.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:20:09.523 00:20:09.523 --- 10.0.0.2 ping statistics --- 00:20:09.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.523 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:09.523 19:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:20:09.523 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:09.524 19:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=360857 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 360857 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 360857 ']' 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.524 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.524 [2024-11-05 19:09:38.194352] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
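nvmfappstart above reduces to launching the target inside the namespace and polling its RPC socket until it answers. A sketch, where the rpc_get_methods probe is an assumption about what waitforlisten polls (the log only shows the launch command and the wait message):

ip netns exec nvmf_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# block until the app is up and listening on the default /var/tmp/spdk.sock
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done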
00:20:09.524 [2024-11-05 19:09:38.194422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.524 [2024-11-05 19:09:38.294958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.524 [2024-11-05 19:09:38.345498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.524 [2024-11-05 19:09:38.345546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.524 [2024-11-05 19:09:38.345554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.524 [2024-11-05 19:09:38.345562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.524 [2024-11-05 19:09:38.345568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.524 [2024-11-05 19:09:38.346323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.785 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:10.046 true 00:20:10.046 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.046 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:20:10.307 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:20:10.307 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:20:10.307 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:10.568 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.568 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:20:10.568 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:20:10.568 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:20:10.568 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:10.836 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:20:10.836 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:20:11.140 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:11.423 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.423 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:20:11.423 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:20:11.423 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:20:11.423 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:11.737 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:20:11.737 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:20:11.998 19:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.A3vqUAqQ7V 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.qLfm6iiLsE 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.A3vqUAqQ7V 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.qLfm6iiLsE 00:20:11.998 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:12.259 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:12.519 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.A3vqUAqQ7V 00:20:12.519 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A3vqUAqQ7V 00:20:12.519 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.519 [2024-11-05 19:09:41.752057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.519 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.780 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.780 [2024-11-05 19:09:42.072834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.780 [2024-11-05 19:09:42.073069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.780 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.041 malloc0 00:20:13.041 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.301 19:09:42 
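xtrace does not echo the body of the "python -" heredocs above, only their inputs and results. Given key 00112233445566778899aabbccddeeff and digest 1 producing NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, format_key evidently base64-encodes the configured key followed by a CRC32. A sketch, with the little-endian CRC byte order taken as an assumption:

format_key() {   # usage: format_key <prefix> <key> <digest>
    local prefix=$1 key=$2 digest=$3
    python - << EOF
import base64, zlib
key = b"$key"
# interchange format: <prefix>:<2-hex-digit digest>:base64(key || CRC32):
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order assumed
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# per the trace above this should print:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: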
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A3vqUAqQ7V 00:20:13.301 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.562 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.A3vqUAqQ7V 00:20:23.562 Initializing NVMe Controllers 00:20:23.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.562 Initialization complete. Launching workers. 00:20:23.562 ======================================================== 00:20:23.562 Latency(us) 00:20:23.562 Device Information : IOPS MiB/s Average min max 00:20:23.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18529.68 72.38 3453.91 1135.92 4386.07 00:20:23.562 ======================================================== 00:20:23.562 Total : 18529.68 72.38 3453.91 1135.92 4386.07 00:20:23.562 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A3vqUAqQ7V 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A3vqUAqQ7V 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=363782 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 363782 /var/tmp/bdevperf.sock 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 363782 ']' 00:20:23.562 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.563 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:23.563 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:23.563 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:23.563 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.824 [2024-11-05 19:09:52.904804] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:23.824 [2024-11-05 19:09:52.904878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363782 ] 00:20:23.824 [2024-11-05 19:09:52.963380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.824 [2024-11-05 19:09:52.992512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.824 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:23.824 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:23.824 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A3vqUAqQ7V 00:20:24.085 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.085 [2024-11-05 19:09:53.401577] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.345 TLSTESTn1 00:20:24.345 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.345 Running I/O for 10 seconds... 
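The ten-second run above was set up by just four initiator-side steps (all paths, NQNs and flags copied from the trace): start bdevperf idle with -z, register the key file on its private RPC socket, attach over TLS, then drive I/O via perform_tests.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.A3vqUAqQ7V
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests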
00:20:26.673 4729.00 IOPS, 18.47 MiB/s [2024-11-05T18:09:56.939Z] 5163.00 IOPS, 20.17 MiB/s [2024-11-05T18:09:57.881Z] 5349.00 IOPS, 20.89 MiB/s [2024-11-05T18:09:58.823Z] 5191.25 IOPS, 20.28 MiB/s [2024-11-05T18:09:59.765Z] 5229.00 IOPS, 20.43 MiB/s [2024-11-05T18:10:00.708Z] 5235.50 IOPS, 20.45 MiB/s [2024-11-05T18:10:01.649Z] 5217.71 IOPS, 20.38 MiB/s [2024-11-05T18:10:03.033Z] 5191.12 IOPS, 20.28 MiB/s [2024-11-05T18:10:03.975Z] 5244.67 IOPS, 20.49 MiB/s [2024-11-05T18:10:03.975Z] 5267.20 IOPS, 20.57 MiB/s 00:20:34.652 Latency(us) 00:20:34.652 [2024-11-05T18:10:03.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.652 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.652 Verification LBA range: start 0x0 length 0x2000 00:20:34.652 TLSTESTn1 : 10.01 5272.40 20.60 0.00 0.00 24246.49 4969.81 55050.24 00:20:34.652 [2024-11-05T18:10:03.975Z] =================================================================================================================== 00:20:34.652 [2024-11-05T18:10:03.975Z] Total : 5272.40 20.60 0.00 0.00 24246.49 4969.81 55050.24 00:20:34.652 { 00:20:34.652 "results": [ 00:20:34.652 { 00:20:34.652 "job": "TLSTESTn1", 00:20:34.652 "core_mask": "0x4", 00:20:34.652 "workload": "verify", 00:20:34.652 "status": "finished", 00:20:34.652 "verify_range": { 00:20:34.652 "start": 0, 00:20:34.652 "length": 8192 00:20:34.652 }, 00:20:34.652 "queue_depth": 128, 00:20:34.652 "io_size": 4096, 00:20:34.652 "runtime": 10.014407, 00:20:34.652 "iops": 5272.404047488783, 00:20:34.652 "mibps": 20.59532831050306, 00:20:34.652 "io_failed": 0, 00:20:34.652 "io_timeout": 0, 00:20:34.652 "avg_latency_us": 24246.494383838384, 00:20:34.652 "min_latency_us": 4969.8133333333335, 00:20:34.652 "max_latency_us": 55050.24 00:20:34.652 } 00:20:34.652 ], 00:20:34.652 "core_count": 1 00:20:34.652 } 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 363782 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 363782 ']' 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 363782 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 363782 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 363782' 00:20:34.652 killing process with pid 363782 00:20:34.652 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 363782 00:20:34.652 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.652 00:20:34.652 Latency(us) 00:20:34.653 [2024-11-05T18:10:03.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.653 [2024-11-05T18:10:03.976Z] 
=================================================================================================================== 00:20:34.653 [2024-11-05T18:10:03.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 363782 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLfm6iiLsE 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLfm6iiLsE 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qLfm6iiLsE 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qLfm6iiLsE 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=365937 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 365937 /var/tmp/bdevperf.sock 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 365937 ']' 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
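The NOT wrapper entering the trace here inverts the wrapped command's exit status: the test only passes if run_bdevperf with the mismatched key /tmp/tmp.qLfm6iiLsE fails. Roughly, eliding the valid_exec_arg sanity check that the trace also shows:

NOT() {
    local es=0
    "$@" || es=$?
    # es > 128 means death by signal, a crash rather than the graceful failure we assert
    if ((es > 128)); then
        return "$es"
    fi
    ((es != 0))   # success only when the wrapped command returned non-zero
}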
00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.653 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.653 [2024-11-05 19:10:03.873882] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:34.653 [2024-11-05 19:10:03.873939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365937 ] 00:20:34.653 [2024-11-05 19:10:03.932191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.653 [2024-11-05 19:10:03.961713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.914 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.914 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:34.914 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qLfm6iiLsE 00:20:34.914 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.176 [2024-11-05 19:10:04.370672] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.176 [2024-11-05 19:10:04.380995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.176 [2024-11-05 19:10:04.381852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1bb0 (107): Transport endpoint is not connected 00:20:35.176 [2024-11-05 19:10:04.382847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1bb0 (9): Bad file descriptor 00:20:35.176 [2024-11-05 19:10:04.383849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:35.176 [2024-11-05 19:10:04.383856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.176 [2024-11-05 19:10:04.383862] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:35.176 [2024-11-05 19:10:04.383870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:35.176 request: 00:20:35.176 { 00:20:35.176 "name": "TLSTEST", 00:20:35.176 "trtype": "tcp", 00:20:35.176 "traddr": "10.0.0.2", 00:20:35.176 "adrfam": "ipv4", 00:20:35.176 "trsvcid": "4420", 00:20:35.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.176 "prchk_reftag": false, 00:20:35.176 "prchk_guard": false, 00:20:35.176 "hdgst": false, 00:20:35.176 "ddgst": false, 00:20:35.176 "psk": "key0", 00:20:35.176 "allow_unrecognized_csi": false, 00:20:35.176 "method": "bdev_nvme_attach_controller", 00:20:35.176 "req_id": 1 00:20:35.176 } 00:20:35.176 Got JSON-RPC error response 00:20:35.176 response: 00:20:35.176 { 00:20:35.176 "code": -5, 00:20:35.176 "message": "Input/output error" 00:20:35.176 } 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 365937 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 365937 ']' 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 365937 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365937 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365937' 00:20:35.176 killing process with pid 365937 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 365937 00:20:35.176 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.176 00:20:35.176 Latency(us) 00:20:35.176 [2024-11-05T18:10:04.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.176 [2024-11-05T18:10:04.499Z] =================================================================================================================== 00:20:35.176 [2024-11-05T18:10:04.499Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.176 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 365937 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A3vqUAqQ7V 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.A3vqUAqQ7V 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.A3vqUAqQ7V 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A3vqUAqQ7V 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=365972 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 365972 /var/tmp/bdevperf.sock 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 365972 ']' 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.437 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.437 [2024-11-05 19:10:04.622480] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:20:35.437 [2024-11-05 19:10:04.622534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365972 ] 00:20:35.437 [2024-11-05 19:10:04.680998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.437 [2024-11-05 19:10:04.709107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.698 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.698 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:35.698 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A3vqUAqQ7V 00:20:35.698 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:35.960 [2024-11-05 19:10:05.126372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.960 [2024-11-05 19:10:05.132736] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.960 [2024-11-05 19:10:05.132761] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.960 [2024-11-05 19:10:05.132781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.960 [2024-11-05 19:10:05.133521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747bb0 (107): Transport endpoint is not connected 00:20:35.960 [2024-11-05 19:10:05.134516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747bb0 (9): Bad file descriptor 00:20:35.960 [2024-11-05 19:10:05.135518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:35.960 [2024-11-05 19:10:05.135525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.960 [2024-11-05 19:10:05.135532] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:35.960 [2024-11-05 19:10:05.135540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:35.960 request: 00:20:35.960 { 00:20:35.960 "name": "TLSTEST", 00:20:35.960 "trtype": "tcp", 00:20:35.960 "traddr": "10.0.0.2", 00:20:35.960 "adrfam": "ipv4", 00:20:35.960 "trsvcid": "4420", 00:20:35.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.960 "prchk_reftag": false, 00:20:35.960 "prchk_guard": false, 00:20:35.960 "hdgst": false, 00:20:35.960 "ddgst": false, 00:20:35.960 "psk": "key0", 00:20:35.960 "allow_unrecognized_csi": false, 00:20:35.960 "method": "bdev_nvme_attach_controller", 00:20:35.960 "req_id": 1 00:20:35.960 } 00:20:35.960 Got JSON-RPC error response 00:20:35.960 response: 00:20:35.960 { 00:20:35.960 "code": -5, 00:20:35.960 "message": "Input/output error" 00:20:35.960 } 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 365972 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 365972 ']' 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 365972 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 365972 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 365972' 00:20:35.960 killing process with pid 365972 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 365972 00:20:35.960 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.960 00:20:35.960 Latency(us) 00:20:35.960 [2024-11-05T18:10:05.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.960 [2024-11-05T18:10:05.283Z] =================================================================================================================== 00:20:35.960 [2024-11-05T18:10:05.283Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.960 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 365972 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A3vqUAqQ7V 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.A3vqUAqQ7V 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.A3vqUAqQ7V 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A3vqUAqQ7V 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=366286 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 366286 /var/tmp/bdevperf.sock 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 366286 ']' 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.222 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.222 [2024-11-05 19:10:05.389212] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
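
The 'NOT run_bdevperf ...' and 'valid_exec_arg' trace above is the suite's expected-failure harness: the wrapped command must exit non-zero (but not die on a signal) for the test itself to pass, which is why the exit-status bookkeeping (es=1, (( es > 128 ))) follows each failed attach. One way to write such a helper, offered as a sketch rather than the exact autotest_common.sh implementation:

    # Succeed only if the wrapped command fails with an ordinary error
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # terminated by a signal: propagate as failure
        (( es != 0 ))                   # exit 0 only when the command returned an error
    }

    # Usage: 'NOT run_bdevperf ...' passes exactly because the attach is expected to fail
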
00:20:36.222 [2024-11-05 19:10:05.389268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366286 ] 00:20:36.222 [2024-11-05 19:10:05.447684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.222 [2024-11-05 19:10:05.476095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.484 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.484 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:36.484 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A3vqUAqQ7V 00:20:36.484 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.745 [2024-11-05 19:10:05.889061] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.745 [2024-11-05 19:10:05.894580] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.745 [2024-11-05 19:10:05.894597] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:36.745 [2024-11-05 19:10:05.894616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.745 [2024-11-05 19:10:05.895291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5bb0 (107): Transport endpoint is not connected 00:20:36.745 [2024-11-05 19:10:05.896287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5bb0 (9): Bad file descriptor 00:20:36.745 [2024-11-05 19:10:05.897289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:36.745 [2024-11-05 19:10:05.897300] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.745 [2024-11-05 19:10:05.897306] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:36.745 [2024-11-05 19:10:05.897314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:36.745 request: 00:20:36.745 { 00:20:36.745 "name": "TLSTEST", 00:20:36.745 "trtype": "tcp", 00:20:36.745 "traddr": "10.0.0.2", 00:20:36.745 "adrfam": "ipv4", 00:20:36.745 "trsvcid": "4420", 00:20:36.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.745 "prchk_reftag": false, 00:20:36.745 "prchk_guard": false, 00:20:36.745 "hdgst": false, 00:20:36.745 "ddgst": false, 00:20:36.745 "psk": "key0", 00:20:36.745 "allow_unrecognized_csi": false, 00:20:36.745 "method": "bdev_nvme_attach_controller", 00:20:36.745 "req_id": 1 00:20:36.745 } 00:20:36.745 Got JSON-RPC error response 00:20:36.745 response: 00:20:36.745 { 00:20:36.745 "code": -5, 00:20:36.745 "message": "Input/output error" 00:20:36.745 } 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 366286 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 366286 ']' 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 366286 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366286 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366286' 00:20:36.745 killing process with pid 366286 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 366286 00:20:36.745 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.745 00:20:36.745 Latency(us) 00:20:36.745 [2024-11-05T18:10:06.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.745 [2024-11-05T18:10:06.068Z] =================================================================================================================== 00:20:36.745 [2024-11-05T18:10:06.068Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.745 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 366286 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.007 19:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=366306 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 366306 /var/tmp/bdevperf.sock 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 366306 ']' 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.007 [2024-11-05 19:10:06.143594] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:20:37.007 [2024-11-05 19:10:06.143646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid366306 ] 00:20:37.007 [2024-11-05 19:10:06.202019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.007 [2024-11-05 19:10:06.229851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:37.007 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:37.269 [2024-11-05 19:10:06.462488] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:37.269 [2024-11-05 19:10:06.462513] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:37.269 request: 00:20:37.269 { 00:20:37.269 "name": "key0", 00:20:37.269 "path": "", 00:20:37.269 "method": "keyring_file_add_key", 00:20:37.269 "req_id": 1 00:20:37.269 } 00:20:37.269 Got JSON-RPC error response 00:20:37.269 response: 00:20:37.269 { 00:20:37.269 "code": -1, 00:20:37.269 "message": "Operation not permitted" 00:20:37.269 } 00:20:37.269 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.530 [2024-11-05 19:10:06.647033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.530 [2024-11-05 19:10:06.647054] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:37.530 request: 00:20:37.530 { 00:20:37.530 "name": "TLSTEST", 00:20:37.530 "trtype": "tcp", 00:20:37.530 "traddr": "10.0.0.2", 00:20:37.530 "adrfam": "ipv4", 00:20:37.530 "trsvcid": "4420", 00:20:37.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.530 "prchk_reftag": false, 00:20:37.530 "prchk_guard": false, 00:20:37.530 "hdgst": false, 00:20:37.530 "ddgst": false, 00:20:37.530 "psk": "key0", 00:20:37.530 "allow_unrecognized_csi": false, 00:20:37.530 "method": "bdev_nvme_attach_controller", 00:20:37.530 "req_id": 1 00:20:37.530 } 00:20:37.530 Got JSON-RPC error response 00:20:37.530 response: 00:20:37.530 { 00:20:37.530 "code": -126, 00:20:37.530 "message": "Required key not available" 00:20:37.530 } 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 366306 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 366306 ']' 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 366306 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366306 
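
Two separate guards fire in the empty-path case above: keyring_file_add_key rejects any path that is not absolute (code -1, "Operation not permitted"), so key0 is never created, and the subsequent attach then fails with -126 ("Required key not available") because bdev_nvme cannot load a key that does not exist. The pair of calls, condensed from the trace (rpc.py standing in for the full script path):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # rejected: path must be absolute
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0                     # -126: key0 was never added
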
00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366306' 00:20:37.530 killing process with pid 366306 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 366306 00:20:37.530 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.530 00:20:37.530 Latency(us) 00:20:37.530 [2024-11-05T18:10:06.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.530 [2024-11-05T18:10:06.853Z] =================================================================================================================== 00:20:37.530 [2024-11-05T18:10:06.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 366306 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 360857 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 360857 ']' 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 360857 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.530 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 360857 00:20:37.791 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:37.791 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:37.791 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 360857' 00:20:37.791 killing process with pid 360857 00:20:37.791 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 360857 00:20:37.791 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 360857 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.TTCTvr9xRO 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.TTCTvr9xRO 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=366652 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 366652 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 366652 ']' 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.791 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.052 [2024-11-05 19:10:07.118873] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:38.052 [2024-11-05 19:10:07.118931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.052 [2024-11-05 19:10:07.214511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.052 [2024-11-05 19:10:07.245636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.052 [2024-11-05 19:10:07.245670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
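
format_interchange_psk above wraps the raw hex string in the NVMe/TCP PSK interchange form NVMeTLSkey-1:02:<base64>:, where the digest argument 2 becomes the 02 hash field (SHA-384) and the base64 payload carries the ASCII key plus four trailing bytes. A hypothetical standalone re-implementation, assuming (as the trailing wWXNJw== suggests and as the 'python -' heredoc in the trace computes) that those bytes are a little-endian CRC-32 of the key:

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 - <<-EOF
	import base64, zlib
	key = b"$key"
	# Append a little-endian CRC-32 of the ASCII key, then base64 the result
	crc = zlib.crc32(key).to_bytes(4, "little")
	print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
	EOF

Under that assumption this should reproduce the key_long value captured above, which the script then writes to the mktemp file and chmods to 0600 so the keyring will accept it.
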
00:20:38.052 [2024-11-05 19:10:07.245676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.052 [2024-11-05 19:10:07.245681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.052 [2024-11-05 19:10:07.245685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.052 [2024-11-05 19:10:07.246200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.624 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.624 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:38.624 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:38.624 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.624 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.884 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.884 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:20:38.884 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TTCTvr9xRO 00:20:38.884 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.884 [2024-11-05 19:10:08.107145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.884 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.144 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.144 [2024-11-05 19:10:08.439949] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.144 [2024-11-05 19:10:08.440181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.144 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.404 malloc0 00:20:39.404 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.664 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:39.664 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTCTvr9xRO 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TTCTvr9xRO 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=367018 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 367018 /var/tmp/bdevperf.sock 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 367018 ']' 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.924 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.924 [2024-11-05 19:10:09.146269] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
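
At this point the target side is fully provisioned for TLS: a TCP transport, subsystem cnode1 backed by a 32 MiB malloc namespace (4096-byte blocks), a listener opened with -k (TLS-enabled), the key file registered, and host1 authorized against it. Condensed from the trace above, with rpc.py standing in for the full script path and addresses as in this run:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
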
00:20:39.924 [2024-11-05 19:10:09.146324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367018 ] 00:20:39.924 [2024-11-05 19:10:09.203684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.924 [2024-11-05 19:10:09.232924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.184 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.184 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:40.184 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:40.184 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.447 [2024-11-05 19:10:09.589796] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.447 TLSTESTn1 00:20:40.447 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:40.447 Running I/O for 10 seconds... 00:20:42.771 5880.00 IOPS, 22.97 MiB/s [2024-11-05T18:10:13.034Z] 6192.50 IOPS, 24.19 MiB/s [2024-11-05T18:10:13.998Z] 6306.00 IOPS, 24.63 MiB/s [2024-11-05T18:10:14.946Z] 6190.00 IOPS, 24.18 MiB/s [2024-11-05T18:10:15.889Z] 6060.40 IOPS, 23.67 MiB/s [2024-11-05T18:10:16.831Z] 6088.00 IOPS, 23.78 MiB/s [2024-11-05T18:10:18.215Z] 6026.57 IOPS, 23.54 MiB/s [2024-11-05T18:10:19.157Z] 6031.25 IOPS, 23.56 MiB/s [2024-11-05T18:10:20.097Z] 5989.11 IOPS, 23.39 MiB/s [2024-11-05T18:10:20.097Z] 6047.10 IOPS, 23.62 MiB/s 00:20:50.774 Latency(us) 00:20:50.774 [2024-11-05T18:10:20.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.774 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.774 Verification LBA range: start 0x0 length 0x2000 00:20:50.775 TLSTESTn1 : 10.01 6052.74 23.64 0.00 0.00 21118.27 4505.60 83012.27 00:20:50.775 [2024-11-05T18:10:20.098Z] =================================================================================================================== 00:20:50.775 [2024-11-05T18:10:20.098Z] Total : 6052.74 23.64 0.00 0.00 21118.27 4505.60 83012.27 00:20:50.775 { 00:20:50.775 "results": [ 00:20:50.775 { 00:20:50.775 "job": "TLSTESTn1", 00:20:50.775 "core_mask": "0x4", 00:20:50.775 "workload": "verify", 00:20:50.775 "status": "finished", 00:20:50.775 "verify_range": { 00:20:50.775 "start": 0, 00:20:50.775 "length": 8192 00:20:50.775 }, 00:20:50.775 "queue_depth": 128, 00:20:50.775 "io_size": 4096, 00:20:50.775 "runtime": 10.011491, 00:20:50.775 "iops": 6052.744790960707, 00:20:50.775 "mibps": 23.643534339690262, 00:20:50.775 "io_failed": 0, 00:20:50.775 "io_timeout": 0, 00:20:50.775 "avg_latency_us": 21118.26657733331, 00:20:50.775 "min_latency_us": 4505.6, 00:20:50.775 "max_latency_us": 83012.26666666666 00:20:50.775 } 00:20:50.775 ], 00:20:50.775 "core_count": 1 
00:20:50.775 } 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 367018 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 367018 ']' 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 367018 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 367018 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 367018' 00:20:50.775 killing process with pid 367018 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 367018 00:20:50.775 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.775 00:20:50.775 Latency(us) 00:20:50.775 [2024-11-05T18:10:20.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.775 [2024-11-05T18:10:20.098Z] =================================================================================================================== 00:20:50.775 [2024-11-05T18:10:20.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 367018 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.TTCTvr9xRO 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTCTvr9xRO 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTCTvr9xRO 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TTCTvr9xRO 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.775 19:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TTCTvr9xRO 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=369032 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 369032 /var/tmp/bdevperf.sock 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 369032 ']' 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:50.775 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.775 [2024-11-05 19:10:20.039632] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
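
The TLSTESTn1 throughput figures above come from the positive path: once the controller attaches over TLS, the idle bdevperf instance (started with -z to wait for RPC) is told over its own RPC socket to run the configured verify workload. The shape of that run, with paths abbreviated and all parameters taken from the trace:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
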
00:20:50.775 [2024-11-05 19:10:20.039692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369032 ] 00:20:50.775 [2024-11-05 19:10:20.097481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.035 [2024-11-05 19:10:20.127574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.035 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:51.035 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:51.036 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:51.036 [2024-11-05 19:10:20.352004] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TTCTvr9xRO': 0100666 00:20:51.036 [2024-11-05 19:10:20.352025] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:51.036 request: 00:20:51.036 { 00:20:51.036 "name": "key0", 00:20:51.036 "path": "/tmp/tmp.TTCTvr9xRO", 00:20:51.036 "method": "keyring_file_add_key", 00:20:51.036 "req_id": 1 00:20:51.036 } 00:20:51.036 Got JSON-RPC error response 00:20:51.036 response: 00:20:51.036 { 00:20:51.036 "code": -1, 00:20:51.036 "message": "Operation not permitted" 00:20:51.036 } 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.296 [2024-11-05 19:10:20.520498] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.296 [2024-11-05 19:10:20.520523] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:51.296 request: 00:20:51.296 { 00:20:51.296 "name": "TLSTEST", 00:20:51.296 "trtype": "tcp", 00:20:51.296 "traddr": "10.0.0.2", 00:20:51.296 "adrfam": "ipv4", 00:20:51.296 "trsvcid": "4420", 00:20:51.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.296 "prchk_reftag": false, 00:20:51.296 "prchk_guard": false, 00:20:51.296 "hdgst": false, 00:20:51.296 "ddgst": false, 00:20:51.296 "psk": "key0", 00:20:51.296 "allow_unrecognized_csi": false, 00:20:51.296 "method": "bdev_nvme_attach_controller", 00:20:51.296 "req_id": 1 00:20:51.296 } 00:20:51.296 Got JSON-RPC error response 00:20:51.296 response: 00:20:51.296 { 00:20:51.296 "code": -126, 00:20:51.296 "message": "Required key not available" 00:20:51.296 } 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 369032 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 369032 ']' 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 369032 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 369032 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 369032' 00:20:51.296 killing process with pid 369032 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 369032 00:20:51.296 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.296 00:20:51.296 Latency(us) 00:20:51.296 [2024-11-05T18:10:20.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.296 [2024-11-05T18:10:20.619Z] =================================================================================================================== 00:20:51.296 [2024-11-05T18:10:20.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.296 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 369032 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 366652 ']' 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 366652' 00:20:51.557 killing process with pid 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 366652 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=369371 
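
The keyring also enforces file permissions: after the deliberate chmod 0666 at target/tls.sh@166, keyring_file_add_key refuses the key file ("Invalid permissions for key file '/tmp/tmp.TTCTvr9xRO': 0100666", code -1), and the attach again fails with -126. A PSK file has to be accessible to its owner only:

    chmod 0666 /tmp/tmp.TTCTvr9xRO   # group/world-readable: keyring_file_add_key now rejects it
    chmod 0600 /tmp/tmp.TTCTvr9xRO   # owner-only: the file is acceptable to the keyring again
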
00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 369371 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 369371 ']' 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:51.557 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.817 [2024-11-05 19:10:20.924109] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:51.817 [2024-11-05 19:10:20.924175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.817 [2024-11-05 19:10:21.014314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.817 [2024-11-05 19:10:21.043236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.817 [2024-11-05 19:10:21.043263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.817 [2024-11-05 19:10:21.043269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.817 [2024-11-05 19:10:21.043274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.817 [2024-11-05 19:10:21.043278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.817 [2024-11-05 19:10:21.043725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.389 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:52.389 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:52.389 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:52.389 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.389 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TTCTvr9xRO 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.650 [2024-11-05 19:10:21.886943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.650 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.910 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:52.910 [2024-11-05 19:10:22.203714] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.910 [2024-11-05 19:10:22.203944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.910 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.170 malloc0 00:20:53.170 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.430 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:53.430 [2024-11-05 
19:10:22.702831] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TTCTvr9xRO': 0100666 00:20:53.430 [2024-11-05 19:10:22.702855] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:53.430 request: 00:20:53.430 { 00:20:53.430 "name": "key0", 00:20:53.430 "path": "/tmp/tmp.TTCTvr9xRO", 00:20:53.430 "method": "keyring_file_add_key", 00:20:53.430 "req_id": 1 00:20:53.430 } 00:20:53.430 Got JSON-RPC error response 00:20:53.430 response: 00:20:53.430 { 00:20:53.430 "code": -1, 00:20:53.430 "message": "Operation not permitted" 00:20:53.430 } 00:20:53.430 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:53.691 [2024-11-05 19:10:22.871271] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:53.691 [2024-11-05 19:10:22.871298] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:53.691 request: 00:20:53.691 { 00:20:53.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.691 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.691 "psk": "key0", 00:20:53.691 "method": "nvmf_subsystem_add_host", 00:20:53.691 "req_id": 1 00:20:53.691 } 00:20:53.691 Got JSON-RPC error response 00:20:53.691 response: 00:20:53.691 { 00:20:53.691 "code": -32603, 00:20:53.691 "message": "Internal error" 00:20:53.691 } 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 369371 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 369371 ']' 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 369371 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 369371 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 369371' 00:20:53.691 killing process with pid 369371 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 369371 00:20:53.691 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 369371 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.TTCTvr9xRO 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=369743 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 369743 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 369743 ']' 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.953 [2024-11-05 19:10:23.102960] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:53.953 [2024-11-05 19:10:23.103004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.953 [2024-11-05 19:10:23.157016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.953 [2024-11-05 19:10:23.185507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.953 [2024-11-05 19:10:23.185533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.953 [2024-11-05 19:10:23.185539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.953 [2024-11-05 19:10:23.185543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.953 [2024-11-05 19:10:23.185547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
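
The same permission check just fired on the target side: with the key file still at 0666, setup_nvmf_tgt's keyring_file_add_key failed, so key0 did not exist when nvmf_subsystem_add_host referenced it, surfacing as the -32603 "Internal error" above; target/tls.sh@177 then restores mode 0600 before starting a fresh target (pid 369743) for the next positive pass. The ordering dependency, sketched with rpc.py standing in for the full script path:

    rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO    # fails while the file is 0666
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                # -32603: Key 'key0' does not exist
    chmod 0600 /tmp/tmp.TTCTvr9xRO                          # owner-only again; re-adding succeeds
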
00:20:53.953 [2024-11-05 19:10:23.185983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.953 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.215 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.215 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:20:54.215 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TTCTvr9xRO 00:20:54.215 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.215 [2024-11-05 19:10:23.443730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.215 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.476 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.476 [2024-11-05 19:10:23.756494] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.476 [2024-11-05 19:10:23.756686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.476 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.737 malloc0 00:20:54.737 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.998 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:54.998 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=370106 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 370106 /var/tmp/bdevperf.sock 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 370106 ']' 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.258 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.259 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.259 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.259 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.259 [2024-11-05 19:10:24.458500] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:55.259 [2024-11-05 19:10:24.458595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370106 ] 00:20:55.259 [2024-11-05 19:10:24.520221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.259 [2024-11-05 19:10:24.548962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.519 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.519 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:55.519 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:20:55.519 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.780 [2024-11-05 19:10:24.957859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.780 TLSTESTn1 00:20:55.780 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:56.041 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:20:56.041 "subsystems": [ 00:20:56.041 { 00:20:56.041 "subsystem": "keyring", 00:20:56.041 "config": [ 00:20:56.041 { 00:20:56.041 "method": "keyring_file_add_key", 00:20:56.041 "params": { 00:20:56.041 "name": "key0", 00:20:56.041 "path": "/tmp/tmp.TTCTvr9xRO" 00:20:56.041 } 00:20:56.041 } 00:20:56.041 ] 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "subsystem": "iobuf", 00:20:56.041 "config": [ 00:20:56.041 { 00:20:56.041 "method": "iobuf_set_options", 00:20:56.041 "params": { 00:20:56.041 "small_pool_count": 8192, 00:20:56.041 "large_pool_count": 1024, 00:20:56.041 "small_bufsize": 8192, 00:20:56.041 "large_bufsize": 135168, 00:20:56.041 "enable_numa": false 00:20:56.041 } 00:20:56.041 } 00:20:56.041 ] 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "subsystem": "sock", 00:20:56.041 "config": [ 00:20:56.041 { 00:20:56.041 "method": "sock_set_default_impl", 00:20:56.041 "params": { 00:20:56.041 "impl_name": "posix" 
00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "sock_impl_set_options", 00:20:56.041 "params": { 00:20:56.041 "impl_name": "ssl", 00:20:56.041 "recv_buf_size": 4096, 00:20:56.041 "send_buf_size": 4096, 00:20:56.041 "enable_recv_pipe": true, 00:20:56.041 "enable_quickack": false, 00:20:56.041 "enable_placement_id": 0, 00:20:56.041 "enable_zerocopy_send_server": true, 00:20:56.041 "enable_zerocopy_send_client": false, 00:20:56.041 "zerocopy_threshold": 0, 00:20:56.041 "tls_version": 0, 00:20:56.041 "enable_ktls": false 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "sock_impl_set_options", 00:20:56.041 "params": { 00:20:56.041 "impl_name": "posix", 00:20:56.041 "recv_buf_size": 2097152, 00:20:56.041 "send_buf_size": 2097152, 00:20:56.041 "enable_recv_pipe": true, 00:20:56.041 "enable_quickack": false, 00:20:56.041 "enable_placement_id": 0, 00:20:56.041 "enable_zerocopy_send_server": true, 00:20:56.041 "enable_zerocopy_send_client": false, 00:20:56.041 "zerocopy_threshold": 0, 00:20:56.041 "tls_version": 0, 00:20:56.041 "enable_ktls": false 00:20:56.041 } 00:20:56.041 } 00:20:56.041 ] 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "subsystem": "vmd", 00:20:56.041 "config": [] 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "subsystem": "accel", 00:20:56.041 "config": [ 00:20:56.041 { 00:20:56.041 "method": "accel_set_options", 00:20:56.041 "params": { 00:20:56.041 "small_cache_size": 128, 00:20:56.041 "large_cache_size": 16, 00:20:56.041 "task_count": 2048, 00:20:56.041 "sequence_count": 2048, 00:20:56.041 "buf_count": 2048 00:20:56.041 } 00:20:56.041 } 00:20:56.041 ] 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "subsystem": "bdev", 00:20:56.041 "config": [ 00:20:56.041 { 00:20:56.041 "method": "bdev_set_options", 00:20:56.041 "params": { 00:20:56.041 "bdev_io_pool_size": 65535, 00:20:56.041 "bdev_io_cache_size": 256, 00:20:56.041 "bdev_auto_examine": true, 00:20:56.041 "iobuf_small_cache_size": 128, 00:20:56.041 "iobuf_large_cache_size": 16 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "bdev_raid_set_options", 00:20:56.041 "params": { 00:20:56.041 "process_window_size_kb": 1024, 00:20:56.041 "process_max_bandwidth_mb_sec": 0 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "bdev_iscsi_set_options", 00:20:56.041 "params": { 00:20:56.041 "timeout_sec": 30 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "bdev_nvme_set_options", 00:20:56.041 "params": { 00:20:56.041 "action_on_timeout": "none", 00:20:56.041 "timeout_us": 0, 00:20:56.041 "timeout_admin_us": 0, 00:20:56.041 "keep_alive_timeout_ms": 10000, 00:20:56.041 "arbitration_burst": 0, 00:20:56.041 "low_priority_weight": 0, 00:20:56.041 "medium_priority_weight": 0, 00:20:56.041 "high_priority_weight": 0, 00:20:56.041 "nvme_adminq_poll_period_us": 10000, 00:20:56.041 "nvme_ioq_poll_period_us": 0, 00:20:56.041 "io_queue_requests": 0, 00:20:56.041 "delay_cmd_submit": true, 00:20:56.041 "transport_retry_count": 4, 00:20:56.041 "bdev_retry_count": 3, 00:20:56.041 "transport_ack_timeout": 0, 00:20:56.041 "ctrlr_loss_timeout_sec": 0, 00:20:56.041 "reconnect_delay_sec": 0, 00:20:56.041 "fast_io_fail_timeout_sec": 0, 00:20:56.041 "disable_auto_failback": false, 00:20:56.041 "generate_uuids": false, 00:20:56.041 "transport_tos": 0, 00:20:56.041 "nvme_error_stat": false, 00:20:56.041 "rdma_srq_size": 0, 00:20:56.041 "io_path_stat": false, 00:20:56.041 "allow_accel_sequence": false, 00:20:56.041 "rdma_max_cq_size": 0, 00:20:56.041 
"rdma_cm_event_timeout_ms": 0, 00:20:56.041 "dhchap_digests": [ 00:20:56.041 "sha256", 00:20:56.041 "sha384", 00:20:56.041 "sha512" 00:20:56.041 ], 00:20:56.041 "dhchap_dhgroups": [ 00:20:56.041 "null", 00:20:56.041 "ffdhe2048", 00:20:56.041 "ffdhe3072", 00:20:56.041 "ffdhe4096", 00:20:56.041 "ffdhe6144", 00:20:56.041 "ffdhe8192" 00:20:56.041 ] 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "bdev_nvme_set_hotplug", 00:20:56.041 "params": { 00:20:56.041 "period_us": 100000, 00:20:56.041 "enable": false 00:20:56.041 } 00:20:56.041 }, 00:20:56.041 { 00:20:56.041 "method": "bdev_malloc_create", 00:20:56.042 "params": { 00:20:56.042 "name": "malloc0", 00:20:56.042 "num_blocks": 8192, 00:20:56.042 "block_size": 4096, 00:20:56.042 "physical_block_size": 4096, 00:20:56.042 "uuid": "022d863d-86f3-4076-a411-436e8913bcea", 00:20:56.042 "optimal_io_boundary": 0, 00:20:56.042 "md_size": 0, 00:20:56.042 "dif_type": 0, 00:20:56.042 "dif_is_head_of_md": false, 00:20:56.042 "dif_pi_format": 0 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "bdev_wait_for_examine" 00:20:56.042 } 00:20:56.042 ] 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "subsystem": "nbd", 00:20:56.042 "config": [] 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "subsystem": "scheduler", 00:20:56.042 "config": [ 00:20:56.042 { 00:20:56.042 "method": "framework_set_scheduler", 00:20:56.042 "params": { 00:20:56.042 "name": "static" 00:20:56.042 } 00:20:56.042 } 00:20:56.042 ] 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "subsystem": "nvmf", 00:20:56.042 "config": [ 00:20:56.042 { 00:20:56.042 "method": "nvmf_set_config", 00:20:56.042 "params": { 00:20:56.042 "discovery_filter": "match_any", 00:20:56.042 "admin_cmd_passthru": { 00:20:56.042 "identify_ctrlr": false 00:20:56.042 }, 00:20:56.042 "dhchap_digests": [ 00:20:56.042 "sha256", 00:20:56.042 "sha384", 00:20:56.042 "sha512" 00:20:56.042 ], 00:20:56.042 "dhchap_dhgroups": [ 00:20:56.042 "null", 00:20:56.042 "ffdhe2048", 00:20:56.042 "ffdhe3072", 00:20:56.042 "ffdhe4096", 00:20:56.042 "ffdhe6144", 00:20:56.042 "ffdhe8192" 00:20:56.042 ] 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_set_max_subsystems", 00:20:56.042 "params": { 00:20:56.042 "max_subsystems": 1024 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_set_crdt", 00:20:56.042 "params": { 00:20:56.042 "crdt1": 0, 00:20:56.042 "crdt2": 0, 00:20:56.042 "crdt3": 0 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_create_transport", 00:20:56.042 "params": { 00:20:56.042 "trtype": "TCP", 00:20:56.042 "max_queue_depth": 128, 00:20:56.042 "max_io_qpairs_per_ctrlr": 127, 00:20:56.042 "in_capsule_data_size": 4096, 00:20:56.042 "max_io_size": 131072, 00:20:56.042 "io_unit_size": 131072, 00:20:56.042 "max_aq_depth": 128, 00:20:56.042 "num_shared_buffers": 511, 00:20:56.042 "buf_cache_size": 4294967295, 00:20:56.042 "dif_insert_or_strip": false, 00:20:56.042 "zcopy": false, 00:20:56.042 "c2h_success": false, 00:20:56.042 "sock_priority": 0, 00:20:56.042 "abort_timeout_sec": 1, 00:20:56.042 "ack_timeout": 0, 00:20:56.042 "data_wr_pool_size": 0 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_create_subsystem", 00:20:56.042 "params": { 00:20:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.042 "allow_any_host": false, 00:20:56.042 "serial_number": "SPDK00000000000001", 00:20:56.042 "model_number": "SPDK bdev Controller", 00:20:56.042 "max_namespaces": 10, 00:20:56.042 "min_cntlid": 1, 00:20:56.042 
"max_cntlid": 65519, 00:20:56.042 "ana_reporting": false 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_subsystem_add_host", 00:20:56.042 "params": { 00:20:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.042 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.042 "psk": "key0" 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_subsystem_add_ns", 00:20:56.042 "params": { 00:20:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.042 "namespace": { 00:20:56.042 "nsid": 1, 00:20:56.042 "bdev_name": "malloc0", 00:20:56.042 "nguid": "022D863D86F34076A411436E8913BCEA", 00:20:56.042 "uuid": "022d863d-86f3-4076-a411-436e8913bcea", 00:20:56.042 "no_auto_visible": false 00:20:56.042 } 00:20:56.042 } 00:20:56.042 }, 00:20:56.042 { 00:20:56.042 "method": "nvmf_subsystem_add_listener", 00:20:56.042 "params": { 00:20:56.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.042 "listen_address": { 00:20:56.042 "trtype": "TCP", 00:20:56.042 "adrfam": "IPv4", 00:20:56.042 "traddr": "10.0.0.2", 00:20:56.042 "trsvcid": "4420" 00:20:56.042 }, 00:20:56.042 "secure_channel": true 00:20:56.042 } 00:20:56.042 } 00:20:56.042 ] 00:20:56.042 } 00:20:56.042 ] 00:20:56.042 }' 00:20:56.042 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:56.303 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:20:56.303 "subsystems": [ 00:20:56.303 { 00:20:56.303 "subsystem": "keyring", 00:20:56.303 "config": [ 00:20:56.303 { 00:20:56.303 "method": "keyring_file_add_key", 00:20:56.303 "params": { 00:20:56.303 "name": "key0", 00:20:56.303 "path": "/tmp/tmp.TTCTvr9xRO" 00:20:56.303 } 00:20:56.303 } 00:20:56.303 ] 00:20:56.303 }, 00:20:56.303 { 00:20:56.303 "subsystem": "iobuf", 00:20:56.303 "config": [ 00:20:56.303 { 00:20:56.303 "method": "iobuf_set_options", 00:20:56.303 "params": { 00:20:56.303 "small_pool_count": 8192, 00:20:56.303 "large_pool_count": 1024, 00:20:56.304 "small_bufsize": 8192, 00:20:56.304 "large_bufsize": 135168, 00:20:56.304 "enable_numa": false 00:20:56.304 } 00:20:56.304 } 00:20:56.304 ] 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "subsystem": "sock", 00:20:56.304 "config": [ 00:20:56.304 { 00:20:56.304 "method": "sock_set_default_impl", 00:20:56.304 "params": { 00:20:56.304 "impl_name": "posix" 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "sock_impl_set_options", 00:20:56.304 "params": { 00:20:56.304 "impl_name": "ssl", 00:20:56.304 "recv_buf_size": 4096, 00:20:56.304 "send_buf_size": 4096, 00:20:56.304 "enable_recv_pipe": true, 00:20:56.304 "enable_quickack": false, 00:20:56.304 "enable_placement_id": 0, 00:20:56.304 "enable_zerocopy_send_server": true, 00:20:56.304 "enable_zerocopy_send_client": false, 00:20:56.304 "zerocopy_threshold": 0, 00:20:56.304 "tls_version": 0, 00:20:56.304 "enable_ktls": false 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "sock_impl_set_options", 00:20:56.304 "params": { 00:20:56.304 "impl_name": "posix", 00:20:56.304 "recv_buf_size": 2097152, 00:20:56.304 "send_buf_size": 2097152, 00:20:56.304 "enable_recv_pipe": true, 00:20:56.304 "enable_quickack": false, 00:20:56.304 "enable_placement_id": 0, 00:20:56.304 "enable_zerocopy_send_server": true, 00:20:56.304 "enable_zerocopy_send_client": false, 00:20:56.304 "zerocopy_threshold": 0, 00:20:56.304 "tls_version": 0, 00:20:56.304 "enable_ktls": false 00:20:56.304 } 00:20:56.304 
} 00:20:56.304 ] 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "subsystem": "vmd", 00:20:56.304 "config": [] 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "subsystem": "accel", 00:20:56.304 "config": [ 00:20:56.304 { 00:20:56.304 "method": "accel_set_options", 00:20:56.304 "params": { 00:20:56.304 "small_cache_size": 128, 00:20:56.304 "large_cache_size": 16, 00:20:56.304 "task_count": 2048, 00:20:56.304 "sequence_count": 2048, 00:20:56.304 "buf_count": 2048 00:20:56.304 } 00:20:56.304 } 00:20:56.304 ] 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "subsystem": "bdev", 00:20:56.304 "config": [ 00:20:56.304 { 00:20:56.304 "method": "bdev_set_options", 00:20:56.304 "params": { 00:20:56.304 "bdev_io_pool_size": 65535, 00:20:56.304 "bdev_io_cache_size": 256, 00:20:56.304 "bdev_auto_examine": true, 00:20:56.304 "iobuf_small_cache_size": 128, 00:20:56.304 "iobuf_large_cache_size": 16 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "bdev_raid_set_options", 00:20:56.304 "params": { 00:20:56.304 "process_window_size_kb": 1024, 00:20:56.304 "process_max_bandwidth_mb_sec": 0 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "bdev_iscsi_set_options", 00:20:56.304 "params": { 00:20:56.304 "timeout_sec": 30 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "bdev_nvme_set_options", 00:20:56.304 "params": { 00:20:56.304 "action_on_timeout": "none", 00:20:56.304 "timeout_us": 0, 00:20:56.304 "timeout_admin_us": 0, 00:20:56.304 "keep_alive_timeout_ms": 10000, 00:20:56.304 "arbitration_burst": 0, 00:20:56.304 "low_priority_weight": 0, 00:20:56.304 "medium_priority_weight": 0, 00:20:56.304 "high_priority_weight": 0, 00:20:56.304 "nvme_adminq_poll_period_us": 10000, 00:20:56.304 "nvme_ioq_poll_period_us": 0, 00:20:56.304 "io_queue_requests": 512, 00:20:56.304 "delay_cmd_submit": true, 00:20:56.304 "transport_retry_count": 4, 00:20:56.304 "bdev_retry_count": 3, 00:20:56.304 "transport_ack_timeout": 0, 00:20:56.304 "ctrlr_loss_timeout_sec": 0, 00:20:56.304 "reconnect_delay_sec": 0, 00:20:56.304 "fast_io_fail_timeout_sec": 0, 00:20:56.304 "disable_auto_failback": false, 00:20:56.304 "generate_uuids": false, 00:20:56.304 "transport_tos": 0, 00:20:56.304 "nvme_error_stat": false, 00:20:56.304 "rdma_srq_size": 0, 00:20:56.304 "io_path_stat": false, 00:20:56.304 "allow_accel_sequence": false, 00:20:56.304 "rdma_max_cq_size": 0, 00:20:56.304 "rdma_cm_event_timeout_ms": 0, 00:20:56.304 "dhchap_digests": [ 00:20:56.304 "sha256", 00:20:56.304 "sha384", 00:20:56.304 "sha512" 00:20:56.304 ], 00:20:56.304 "dhchap_dhgroups": [ 00:20:56.304 "null", 00:20:56.304 "ffdhe2048", 00:20:56.304 "ffdhe3072", 00:20:56.304 "ffdhe4096", 00:20:56.304 "ffdhe6144", 00:20:56.304 "ffdhe8192" 00:20:56.304 ] 00:20:56.304 } 00:20:56.304 }, 00:20:56.304 { 00:20:56.304 "method": "bdev_nvme_attach_controller", 00:20:56.304 "params": { 00:20:56.304 "name": "TLSTEST", 00:20:56.304 "trtype": "TCP", 00:20:56.304 "adrfam": "IPv4", 00:20:56.304 "traddr": "10.0.0.2", 00:20:56.304 "trsvcid": "4420", 00:20:56.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.304 "prchk_reftag": false, 00:20:56.304 "prchk_guard": false, 00:20:56.304 "ctrlr_loss_timeout_sec": 0, 00:20:56.304 "reconnect_delay_sec": 0, 00:20:56.304 "fast_io_fail_timeout_sec": 0, 00:20:56.304 "psk": "key0", 00:20:56.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.305 "hdgst": false, 00:20:56.305 "ddgst": false, 00:20:56.305 "multipath": "multipath" 00:20:56.305 } 00:20:56.305 }, 00:20:56.305 { 00:20:56.305 "method": 
"bdev_nvme_set_hotplug", 00:20:56.305 "params": { 00:20:56.305 "period_us": 100000, 00:20:56.305 "enable": false 00:20:56.305 } 00:20:56.305 }, 00:20:56.305 { 00:20:56.305 "method": "bdev_wait_for_examine" 00:20:56.305 } 00:20:56.305 ] 00:20:56.305 }, 00:20:56.305 { 00:20:56.305 "subsystem": "nbd", 00:20:56.305 "config": [] 00:20:56.305 } 00:20:56.305 ] 00:20:56.305 }' 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 370106 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 370106 ']' 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 370106 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 370106 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 370106' 00:20:56.305 killing process with pid 370106 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 370106 00:20:56.305 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.305 00:20:56.305 Latency(us) 00:20:56.305 [2024-11-05T18:10:25.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.305 [2024-11-05T18:10:25.628Z] =================================================================================================================== 00:20:56.305 [2024-11-05T18:10:25.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.305 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 370106 00:20:56.566 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 369743 ']' 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 369743' 00:20:56.567 killing process with pid 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 369743 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.567 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:20:56.567 "subsystems": [ 00:20:56.567 { 00:20:56.567 "subsystem": "keyring", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "keyring_file_add_key", 00:20:56.567 "params": { 00:20:56.567 "name": "key0", 00:20:56.567 "path": "/tmp/tmp.TTCTvr9xRO" 00:20:56.567 } 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "iobuf", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "iobuf_set_options", 00:20:56.567 "params": { 00:20:56.567 "small_pool_count": 8192, 00:20:56.567 "large_pool_count": 1024, 00:20:56.567 "small_bufsize": 8192, 00:20:56.567 "large_bufsize": 135168, 00:20:56.567 "enable_numa": false 00:20:56.567 } 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "sock", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "sock_set_default_impl", 00:20:56.567 "params": { 00:20:56.567 "impl_name": "posix" 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "sock_impl_set_options", 00:20:56.567 "params": { 00:20:56.567 "impl_name": "ssl", 00:20:56.567 "recv_buf_size": 4096, 00:20:56.567 "send_buf_size": 4096, 00:20:56.567 "enable_recv_pipe": true, 00:20:56.567 "enable_quickack": false, 00:20:56.567 "enable_placement_id": 0, 00:20:56.567 "enable_zerocopy_send_server": true, 00:20:56.567 "enable_zerocopy_send_client": false, 00:20:56.567 "zerocopy_threshold": 0, 00:20:56.567 "tls_version": 0, 00:20:56.567 "enable_ktls": false 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "sock_impl_set_options", 00:20:56.567 "params": { 00:20:56.567 "impl_name": "posix", 00:20:56.567 "recv_buf_size": 2097152, 00:20:56.567 "send_buf_size": 2097152, 00:20:56.567 "enable_recv_pipe": true, 00:20:56.567 "enable_quickack": false, 00:20:56.567 "enable_placement_id": 0, 00:20:56.567 "enable_zerocopy_send_server": true, 00:20:56.567 "enable_zerocopy_send_client": false, 00:20:56.567 "zerocopy_threshold": 0, 00:20:56.567 "tls_version": 0, 00:20:56.567 "enable_ktls": false 00:20:56.567 } 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "vmd", 00:20:56.567 "config": [] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "accel", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "accel_set_options", 00:20:56.567 "params": { 00:20:56.567 "small_cache_size": 128, 00:20:56.567 "large_cache_size": 16, 00:20:56.567 "task_count": 2048, 00:20:56.567 "sequence_count": 2048, 00:20:56.567 "buf_count": 2048 00:20:56.567 } 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "bdev", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "bdev_set_options", 00:20:56.567 "params": { 00:20:56.567 "bdev_io_pool_size": 65535, 00:20:56.567 "bdev_io_cache_size": 256, 00:20:56.567 "bdev_auto_examine": true, 00:20:56.567 "iobuf_small_cache_size": 128, 00:20:56.567 "iobuf_large_cache_size": 16 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_raid_set_options", 00:20:56.567 "params": { 00:20:56.567 
"process_window_size_kb": 1024, 00:20:56.567 "process_max_bandwidth_mb_sec": 0 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_iscsi_set_options", 00:20:56.567 "params": { 00:20:56.567 "timeout_sec": 30 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_nvme_set_options", 00:20:56.567 "params": { 00:20:56.567 "action_on_timeout": "none", 00:20:56.567 "timeout_us": 0, 00:20:56.567 "timeout_admin_us": 0, 00:20:56.567 "keep_alive_timeout_ms": 10000, 00:20:56.567 "arbitration_burst": 0, 00:20:56.567 "low_priority_weight": 0, 00:20:56.567 "medium_priority_weight": 0, 00:20:56.567 "high_priority_weight": 0, 00:20:56.567 "nvme_adminq_poll_period_us": 10000, 00:20:56.567 "nvme_ioq_poll_period_us": 0, 00:20:56.567 "io_queue_requests": 0, 00:20:56.567 "delay_cmd_submit": true, 00:20:56.567 "transport_retry_count": 4, 00:20:56.567 "bdev_retry_count": 3, 00:20:56.567 "transport_ack_timeout": 0, 00:20:56.567 "ctrlr_loss_timeout_sec": 0, 00:20:56.567 "reconnect_delay_sec": 0, 00:20:56.567 "fast_io_fail_timeout_sec": 0, 00:20:56.567 "disable_auto_failback": false, 00:20:56.567 "generate_uuids": false, 00:20:56.567 "transport_tos": 0, 00:20:56.567 "nvme_error_stat": false, 00:20:56.567 "rdma_srq_size": 0, 00:20:56.567 "io_path_stat": false, 00:20:56.567 "allow_accel_sequence": false, 00:20:56.567 "rdma_max_cq_size": 0, 00:20:56.567 "rdma_cm_event_timeout_ms": 0, 00:20:56.567 "dhchap_digests": [ 00:20:56.567 "sha256", 00:20:56.567 "sha384", 00:20:56.567 "sha512" 00:20:56.567 ], 00:20:56.567 "dhchap_dhgroups": [ 00:20:56.567 "null", 00:20:56.567 "ffdhe2048", 00:20:56.567 "ffdhe3072", 00:20:56.567 "ffdhe4096", 00:20:56.567 "ffdhe6144", 00:20:56.567 "ffdhe8192" 00:20:56.567 ] 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_nvme_set_hotplug", 00:20:56.567 "params": { 00:20:56.567 "period_us": 100000, 00:20:56.567 "enable": false 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_malloc_create", 00:20:56.567 "params": { 00:20:56.567 "name": "malloc0", 00:20:56.567 "num_blocks": 8192, 00:20:56.567 "block_size": 4096, 00:20:56.567 "physical_block_size": 4096, 00:20:56.567 "uuid": "022d863d-86f3-4076-a411-436e8913bcea", 00:20:56.567 "optimal_io_boundary": 0, 00:20:56.567 "md_size": 0, 00:20:56.567 "dif_type": 0, 00:20:56.567 "dif_is_head_of_md": false, 00:20:56.567 "dif_pi_format": 0 00:20:56.567 } 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "method": "bdev_wait_for_examine" 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "nbd", 00:20:56.567 "config": [] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "scheduler", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "framework_set_scheduler", 00:20:56.567 "params": { 00:20:56.567 "name": "static" 00:20:56.567 } 00:20:56.567 } 00:20:56.567 ] 00:20:56.567 }, 00:20:56.567 { 00:20:56.567 "subsystem": "nvmf", 00:20:56.567 "config": [ 00:20:56.567 { 00:20:56.567 "method": "nvmf_set_config", 00:20:56.567 "params": { 00:20:56.567 "discovery_filter": "match_any", 00:20:56.568 "admin_cmd_passthru": { 00:20:56.568 "identify_ctrlr": false 00:20:56.568 }, 00:20:56.568 "dhchap_digests": [ 00:20:56.568 "sha256", 00:20:56.568 "sha384", 00:20:56.568 "sha512" 00:20:56.568 ], 00:20:56.568 "dhchap_dhgroups": [ 00:20:56.568 "null", 00:20:56.568 "ffdhe2048", 00:20:56.568 "ffdhe3072", 00:20:56.568 "ffdhe4096", 00:20:56.568 "ffdhe6144", 00:20:56.568 "ffdhe8192" 00:20:56.568 ] 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 
00:20:56.568 "method": "nvmf_set_max_subsystems", 00:20:56.568 "params": { 00:20:56.568 "max_subsystems": 1024 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_set_crdt", 00:20:56.568 "params": { 00:20:56.568 "crdt1": 0, 00:20:56.568 "crdt2": 0, 00:20:56.568 "crdt3": 0 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_create_transport", 00:20:56.568 "params": { 00:20:56.568 "trtype": "TCP", 00:20:56.568 "max_queue_depth": 128, 00:20:56.568 "max_io_qpairs_per_ctrlr": 127, 00:20:56.568 "in_capsule_data_size": 4096, 00:20:56.568 "max_io_size": 131072, 00:20:56.568 "io_unit_size": 131072, 00:20:56.568 "max_aq_depth": 128, 00:20:56.568 "num_shared_buffers": 511, 00:20:56.568 "buf_cache_size": 4294967295, 00:20:56.568 "dif_insert_or_strip": false, 00:20:56.568 "zcopy": false, 00:20:56.568 "c2h_success": false, 00:20:56.568 "sock_priority": 0, 00:20:56.568 "abort_timeout_sec": 1, 00:20:56.568 "ack_timeout": 0, 00:20:56.568 "data_wr_pool_size": 0 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_create_subsystem", 00:20:56.568 "params": { 00:20:56.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.568 "allow_any_host": false, 00:20:56.568 "serial_number": "SPDK00000000000001", 00:20:56.568 "model_number": "SPDK bdev Controller", 00:20:56.568 "max_namespaces": 10, 00:20:56.568 "min_cntlid": 1, 00:20:56.568 "max_cntlid": 65519, 00:20:56.568 "ana_reporting": false 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_subsystem_add_host", 00:20:56.568 "params": { 00:20:56.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.568 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.568 "psk": "key0" 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_subsystem_add_ns", 00:20:56.568 "params": { 00:20:56.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.568 "namespace": { 00:20:56.568 "nsid": 1, 00:20:56.568 "bdev_name": "malloc0", 00:20:56.568 "nguid": "022D863D86F34076A411436E8913BCEA", 00:20:56.568 "uuid": "022d863d-86f3-4076-a411-436e8913bcea", 00:20:56.568 "no_auto_visible": false 00:20:56.568 } 00:20:56.568 } 00:20:56.568 }, 00:20:56.568 { 00:20:56.568 "method": "nvmf_subsystem_add_listener", 00:20:56.568 "params": { 00:20:56.568 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.568 "listen_address": { 00:20:56.568 "trtype": "TCP", 00:20:56.568 "adrfam": "IPv4", 00:20:56.568 "traddr": "10.0.0.2", 00:20:56.568 "trsvcid": "4420" 00:20:56.568 }, 00:20:56.568 "secure_channel": true 00:20:56.568 } 00:20:56.568 } 00:20:56.568 ] 00:20:56.568 } 00:20:56.568 ] 00:20:56.568 }' 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=370433 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 370433 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 370433 ']' 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.828 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.829 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.829 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.829 [2024-11-05 19:10:25.952846] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:56.829 [2024-11-05 19:10:25.952903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.829 [2024-11-05 19:10:26.041341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.829 [2024-11-05 19:10:26.069498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.829 [2024-11-05 19:10:26.069527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.829 [2024-11-05 19:10:26.069532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.829 [2024-11-05 19:10:26.069537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.829 [2024-11-05 19:10:26.069541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.829 [2024-11-05 19:10:26.070012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.089 [2024-11-05 19:10:26.262487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.089 [2024-11-05 19:10:26.294517] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.089 [2024-11-05 19:10:26.294734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=370485 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 370485 /var/tmp/bdevperf.sock 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 370485 ']' 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.661 19:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.661 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:20:57.661 "subsystems": [ 00:20:57.661 { 00:20:57.661 "subsystem": "keyring", 00:20:57.661 "config": [ 00:20:57.661 { 00:20:57.661 "method": "keyring_file_add_key", 00:20:57.661 "params": { 00:20:57.661 "name": "key0", 00:20:57.661 "path": "/tmp/tmp.TTCTvr9xRO" 00:20:57.661 } 00:20:57.661 } 00:20:57.661 ] 00:20:57.661 }, 00:20:57.661 { 00:20:57.661 "subsystem": "iobuf", 00:20:57.661 "config": [ 00:20:57.661 { 00:20:57.662 "method": "iobuf_set_options", 00:20:57.662 "params": { 00:20:57.662 "small_pool_count": 8192, 00:20:57.662 "large_pool_count": 1024, 00:20:57.662 "small_bufsize": 8192, 00:20:57.662 "large_bufsize": 135168, 00:20:57.662 "enable_numa": false 00:20:57.662 } 00:20:57.662 } 00:20:57.662 ] 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "subsystem": "sock", 00:20:57.662 "config": [ 00:20:57.662 { 00:20:57.662 "method": "sock_set_default_impl", 00:20:57.662 "params": { 00:20:57.662 "impl_name": "posix" 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "sock_impl_set_options", 00:20:57.662 "params": { 00:20:57.662 "impl_name": "ssl", 00:20:57.662 "recv_buf_size": 4096, 00:20:57.662 "send_buf_size": 4096, 00:20:57.662 "enable_recv_pipe": true, 00:20:57.662 "enable_quickack": false, 00:20:57.662 "enable_placement_id": 0, 00:20:57.662 "enable_zerocopy_send_server": true, 00:20:57.662 "enable_zerocopy_send_client": false, 00:20:57.662 "zerocopy_threshold": 0, 00:20:57.662 "tls_version": 0, 00:20:57.662 "enable_ktls": false 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "sock_impl_set_options", 00:20:57.662 "params": { 00:20:57.662 "impl_name": "posix", 00:20:57.662 "recv_buf_size": 2097152, 00:20:57.662 "send_buf_size": 2097152, 00:20:57.662 "enable_recv_pipe": true, 00:20:57.662 "enable_quickack": false, 00:20:57.662 "enable_placement_id": 0, 00:20:57.662 "enable_zerocopy_send_server": true, 00:20:57.662 "enable_zerocopy_send_client": false, 00:20:57.662 "zerocopy_threshold": 0, 00:20:57.662 "tls_version": 0, 00:20:57.662 "enable_ktls": false 00:20:57.662 } 00:20:57.662 } 00:20:57.662 ] 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "subsystem": "vmd", 00:20:57.662 "config": [] 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "subsystem": "accel", 00:20:57.662 "config": [ 00:20:57.662 { 00:20:57.662 "method": "accel_set_options", 00:20:57.662 "params": { 00:20:57.662 "small_cache_size": 128, 00:20:57.662 "large_cache_size": 16, 00:20:57.662 "task_count": 2048, 00:20:57.662 "sequence_count": 2048, 00:20:57.662 "buf_count": 2048 00:20:57.662 } 00:20:57.662 } 00:20:57.662 ] 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "subsystem": "bdev", 00:20:57.662 "config": [ 00:20:57.662 { 00:20:57.662 "method": "bdev_set_options", 00:20:57.662 "params": { 00:20:57.662 "bdev_io_pool_size": 65535, 00:20:57.662 "bdev_io_cache_size": 256, 00:20:57.662 "bdev_auto_examine": true, 00:20:57.662 "iobuf_small_cache_size": 128, 00:20:57.662 "iobuf_large_cache_size": 16 00:20:57.662 } 00:20:57.662 }, 
00:20:57.662 { 00:20:57.662 "method": "bdev_raid_set_options", 00:20:57.662 "params": { 00:20:57.662 "process_window_size_kb": 1024, 00:20:57.662 "process_max_bandwidth_mb_sec": 0 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "bdev_iscsi_set_options", 00:20:57.662 "params": { 00:20:57.662 "timeout_sec": 30 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "bdev_nvme_set_options", 00:20:57.662 "params": { 00:20:57.662 "action_on_timeout": "none", 00:20:57.662 "timeout_us": 0, 00:20:57.662 "timeout_admin_us": 0, 00:20:57.662 "keep_alive_timeout_ms": 10000, 00:20:57.662 "arbitration_burst": 0, 00:20:57.662 "low_priority_weight": 0, 00:20:57.662 "medium_priority_weight": 0, 00:20:57.662 "high_priority_weight": 0, 00:20:57.662 "nvme_adminq_poll_period_us": 10000, 00:20:57.662 "nvme_ioq_poll_period_us": 0, 00:20:57.662 "io_queue_requests": 512, 00:20:57.662 "delay_cmd_submit": true, 00:20:57.662 "transport_retry_count": 4, 00:20:57.662 "bdev_retry_count": 3, 00:20:57.662 "transport_ack_timeout": 0, 00:20:57.662 "ctrlr_loss_timeout_sec": 0, 00:20:57.662 "reconnect_delay_sec": 0, 00:20:57.662 "fast_io_fail_timeout_sec": 0, 00:20:57.662 "disable_auto_failback": false, 00:20:57.662 "generate_uuids": false, 00:20:57.662 "transport_tos": 0, 00:20:57.662 "nvme_error_stat": false, 00:20:57.662 "rdma_srq_size": 0, 00:20:57.662 "io_path_stat": false, 00:20:57.662 "allow_accel_sequence": false, 00:20:57.662 "rdma_max_cq_size": 0, 00:20:57.662 "rdma_cm_event_timeout_ms": 0, 00:20:57.662 "dhchap_digests": [ 00:20:57.662 "sha256", 00:20:57.662 "sha384", 00:20:57.662 "sha512" 00:20:57.662 ], 00:20:57.662 "dhchap_dhgroups": [ 00:20:57.662 "null", 00:20:57.662 "ffdhe2048", 00:20:57.662 "ffdhe3072", 00:20:57.662 "ffdhe4096", 00:20:57.662 "ffdhe6144", 00:20:57.662 "ffdhe8192" 00:20:57.662 ] 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "bdev_nvme_attach_controller", 00:20:57.662 "params": { 00:20:57.662 "name": "TLSTEST", 00:20:57.662 "trtype": "TCP", 00:20:57.662 "adrfam": "IPv4", 00:20:57.662 "traddr": "10.0.0.2", 00:20:57.662 "trsvcid": "4420", 00:20:57.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.662 "prchk_reftag": false, 00:20:57.662 "prchk_guard": false, 00:20:57.662 "ctrlr_loss_timeout_sec": 0, 00:20:57.662 "reconnect_delay_sec": 0, 00:20:57.662 "fast_io_fail_timeout_sec": 0, 00:20:57.662 "psk": "key0", 00:20:57.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.662 "hdgst": false, 00:20:57.662 "ddgst": false, 00:20:57.662 "multipath": "multipath" 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "bdev_nvme_set_hotplug", 00:20:57.662 "params": { 00:20:57.662 "period_us": 100000, 00:20:57.662 "enable": false 00:20:57.662 } 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "method": "bdev_wait_for_examine" 00:20:57.662 } 00:20:57.662 ] 00:20:57.662 }, 00:20:57.662 { 00:20:57.662 "subsystem": "nbd", 00:20:57.662 "config": [] 00:20:57.662 } 00:20:57.662 ] 00:20:57.662 }' 00:20:57.662 [2024-11-05 19:10:26.868799] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
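Note that both the target (-c /dev/fd/62) and bdevperf (-c /dev/fd/63) take their JSON configuration on an inherited file descriptor rather than from a file on disk. The exact plumbing is not shown verbatim in this log, but the echoed config plus the /dev/fd path is consistent with ordinary bash process substitution, roughly:
# hypothetical sketch of the fd-based config hand-off (assumed, not traced here)
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")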
00:20:57.662 [2024-11-05 19:10:26.868848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370485 ] 00:20:57.662 [2024-11-05 19:10:26.927794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.662 [2024-11-05 19:10:26.956837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.923 [2024-11-05 19:10:27.090555] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.494 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:58.494 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:58.494 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:58.494 Running I/O for 10 seconds... 00:21:00.821 5875.00 IOPS, 22.95 MiB/s [2024-11-05T18:10:31.087Z] 5909.50 IOPS, 23.08 MiB/s [2024-11-05T18:10:32.027Z] 5984.00 IOPS, 23.38 MiB/s [2024-11-05T18:10:32.969Z] 6056.25 IOPS, 23.66 MiB/s [2024-11-05T18:10:33.910Z] 5989.00 IOPS, 23.39 MiB/s [2024-11-05T18:10:34.851Z] 5895.17 IOPS, 23.03 MiB/s [2024-11-05T18:10:35.792Z] 5853.29 IOPS, 22.86 MiB/s [2024-11-05T18:10:37.174Z] 5917.25 IOPS, 23.11 MiB/s [2024-11-05T18:10:38.116Z] 5819.44 IOPS, 22.73 MiB/s [2024-11-05T18:10:38.116Z] 5806.50 IOPS, 22.68 MiB/s 00:21:08.793 Latency(us) 00:21:08.793 [2024-11-05T18:10:38.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.793 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.793 Verification LBA range: start 0x0 length 0x2000 00:21:08.793 TLSTESTn1 : 10.01 5812.04 22.70 0.00 0.00 21992.26 4969.81 24139.09 00:21:08.793 [2024-11-05T18:10:38.116Z] =================================================================================================================== 00:21:08.793 [2024-11-05T18:10:38.116Z] Total : 5812.04 22.70 0.00 0.00 21992.26 4969.81 24139.09 00:21:08.793 { 00:21:08.793 "results": [ 00:21:08.793 { 00:21:08.793 "job": "TLSTESTn1", 00:21:08.793 "core_mask": "0x4", 00:21:08.793 "workload": "verify", 00:21:08.793 "status": "finished", 00:21:08.793 "verify_range": { 00:21:08.793 "start": 0, 00:21:08.793 "length": 8192 00:21:08.793 }, 00:21:08.793 "queue_depth": 128, 00:21:08.793 "io_size": 4096, 00:21:08.793 "runtime": 10.012497, 00:21:08.793 "iops": 5812.036697738836, 00:21:08.793 "mibps": 22.703268350542327, 00:21:08.793 "io_failed": 0, 00:21:08.793 "io_timeout": 0, 00:21:08.793 "avg_latency_us": 21992.255710022397, 00:21:08.793 "min_latency_us": 4969.8133333333335, 00:21:08.793 "max_latency_us": 24139.093333333334 00:21:08.793 } 00:21:08.793 ], 00:21:08.793 "core_count": 1 00:21:08.793 } 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 370485 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 370485 ']' 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 370485 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 370485 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 370485' 00:21:08.793 killing process with pid 370485 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 370485 00:21:08.793 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.793 00:21:08.793 Latency(us) 00:21:08.793 [2024-11-05T18:10:38.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.793 [2024-11-05T18:10:38.116Z] =================================================================================================================== 00:21:08.793 [2024-11-05T18:10:38.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 370485 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 370433 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 370433 ']' 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 370433 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:08.793 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 370433 00:21:08.793 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:08.793 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:08.793 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 370433' 00:21:08.793 killing process with pid 370433 00:21:08.793 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 370433 00:21:08.793 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 370433 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=372822 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 372822 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.053 19:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 372822 ']' 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:09.053 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.053 [2024-11-05 19:10:38.196252] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:09.053 [2024-11-05 19:10:38.196311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.053 [2024-11-05 19:10:38.270815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.053 [2024-11-05 19:10:38.305336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.053 [2024-11-05 19:10:38.305368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.053 [2024-11-05 19:10:38.305375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.053 [2024-11-05 19:10:38.305382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.053 [2024-11-05 19:10:38.305388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
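What follows repeats the target-side setup with the same key file and then drives I/O from bdevperf acting as the TLS initiator. Condensed, the client side traced below (target/tls.sh@224-225) is, with rpc.py again abbreviating the full script path:
# initiator-side attach over the bdevperf RPC socket
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1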
00:21:09.053 [2024-11-05 19:10:38.305944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.993 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:09.993 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:09.993 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:09.993 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:09.993 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.993 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.993 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.TTCTvr9xRO 00:21:09.993 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TTCTvr9xRO 00:21:09.993 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.993 [2024-11-05 19:10:39.166261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.993 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.254 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.254 [2024-11-05 19:10:39.535183] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.254 [2024-11-05 19:10:39.535432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.254 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.515 malloc0 00:21:10.515 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:10.775 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=373189 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 373189 /var/tmp/bdevperf.sock 00:21:11.037 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 373189 ']' 00:21:11.038 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.038 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.038 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.038 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.038 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.038 [2024-11-05 19:10:40.324467] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:11.038 [2024-11-05 19:10:40.324518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373189 ] 00:21:11.298 [2024-11-05 19:10:40.408483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.298 [2024-11-05 19:10:40.437682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.298 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.298 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:11.298 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:21:11.559 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:11.559 [2024-11-05 19:10:40.823335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.820 nvme0n1 00:21:11.820 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:11.820 Running I/O for 1 seconds... 
00:21:12.763 4144.00 IOPS, 16.19 MiB/s 00:21:12.763 Latency(us) 00:21:12.763 [2024-11-05T18:10:42.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.763 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:12.763 Verification LBA range: start 0x0 length 0x2000 00:21:12.763 nvme0n1 : 1.02 4182.09 16.34 0.00 0.00 30405.55 5843.63 37137.07 00:21:12.763 [2024-11-05T18:10:42.086Z] =================================================================================================================== 00:21:12.763 [2024-11-05T18:10:42.086Z] Total : 4182.09 16.34 0.00 0.00 30405.55 5843.63 37137.07 00:21:12.763 { 00:21:12.763 "results": [ 00:21:12.763 { 00:21:12.763 "job": "nvme0n1", 00:21:12.763 "core_mask": "0x2", 00:21:12.763 "workload": "verify", 00:21:12.763 "status": "finished", 00:21:12.763 "verify_range": { 00:21:12.763 "start": 0, 00:21:12.763 "length": 8192 00:21:12.763 }, 00:21:12.763 "queue_depth": 128, 00:21:12.763 "io_size": 4096, 00:21:12.763 "runtime": 1.021499, 00:21:12.763 "iops": 4182.089262936136, 00:21:12.763 "mibps": 16.33628618334428, 00:21:12.763 "io_failed": 0, 00:21:12.763 "io_timeout": 0, 00:21:12.763 "avg_latency_us": 30405.551460674153, 00:21:12.763 "min_latency_us": 5843.626666666667, 00:21:12.763 "max_latency_us": 37137.066666666666 00:21:12.763 } 00:21:12.763 ], 00:21:12.763 "core_count": 1 00:21:12.763 } 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 373189 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 373189 ']' 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 373189 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:12.763 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 373189 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 373189' 00:21:13.023 killing process with pid 373189 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 373189 00:21:13.023 Received shutdown signal, test time was about 1.000000 seconds 00:21:13.023 00:21:13.023 Latency(us) 00:21:13.023 [2024-11-05T18:10:42.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.023 [2024-11-05T18:10:42.346Z] =================================================================================================================== 00:21:13.023 [2024-11-05T18:10:42.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 373189 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 372822 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 372822 ']' 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 372822 00:21:13.023 19:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 372822 00:21:13.023 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:13.024 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:13.024 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 372822' 00:21:13.024 killing process with pid 372822 00:21:13.024 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 372822 00:21:13.024 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 372822 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=373543 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 373543 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 373543 ']' 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:13.284 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.284 [2024-11-05 19:10:42.457702] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:13.284 [2024-11-05 19:10:42.457761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.284 [2024-11-05 19:10:42.534594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.284 [2024-11-05 19:10:42.568361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.284 [2024-11-05 19:10:42.568392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:13.284 [2024-11-05 19:10:42.568401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.284 [2024-11-05 19:10:42.568408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.285 [2024-11-05 19:10:42.568414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.285 [2024-11-05 19:10:42.568993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.226 [2024-11-05 19:10:43.305430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.226 malloc0 00:21:14.226 [2024-11-05 19:10:43.332158] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.226 [2024-11-05 19:10:43.332392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=373891 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 373891 /var/tmp/bdevperf.sock 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 373891 ']' 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:14.226 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.226 [2024-11-05 19:10:43.412861] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
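For readability, here is the bdevperf invocation traced above in annotated form (the command is verbatim from this run; the comments are annotations added here, following bdevperf's usage text, and are not part of the original command):

# annotated form of: bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
args=(
    -m 2                        # core mask: run the reactor on core 1
    -z                          # start idle and wait for the perform_tests RPC
    -r /var/tmp/bdevperf.sock   # RPC listen socket for rpc.py / bdevperf.py
    -q 128                      # queue depth
    -o 4k                       # I/O size
    -w verify                   # write-read-verify workload
    -t 1                        # run time in seconds
)
"$bdevperf" "${args[@]}"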
00:21:14.226 [2024-11-05 19:10:43.412910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373891 ] 00:21:14.226 [2024-11-05 19:10:43.494829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.226 [2024-11-05 19:10:43.524578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.167 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.167 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:15.167 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TTCTvr9xRO 00:21:15.167 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:15.427 [2024-11-05 19:10:44.507786] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.427 nvme0n1 00:21:15.427 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.427 Running I/O for 1 seconds... 00:21:16.628 4823.00 IOPS, 18.84 MiB/s 00:21:16.628 Latency(us) 00:21:16.628 [2024-11-05T18:10:45.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.628 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:16.628 Verification LBA range: start 0x0 length 0x2000 00:21:16.628 nvme0n1 : 1.02 4870.74 19.03 0.00 0.00 26064.82 4724.05 70778.88 00:21:16.628 [2024-11-05T18:10:45.951Z] =================================================================================================================== 00:21:16.628 [2024-11-05T18:10:45.951Z] Total : 4870.74 19.03 0.00 0.00 26064.82 4724.05 70778.88 00:21:16.628 { 00:21:16.628 "results": [ 00:21:16.628 { 00:21:16.628 "job": "nvme0n1", 00:21:16.628 "core_mask": "0x2", 00:21:16.628 "workload": "verify", 00:21:16.628 "status": "finished", 00:21:16.628 "verify_range": { 00:21:16.628 "start": 0, 00:21:16.628 "length": 8192 00:21:16.628 }, 00:21:16.628 "queue_depth": 128, 00:21:16.628 "io_size": 4096, 00:21:16.628 "runtime": 1.016477, 00:21:16.628 "iops": 4870.7447389365425, 00:21:16.628 "mibps": 19.02634663647087, 00:21:16.628 "io_failed": 0, 00:21:16.628 "io_timeout": 0, 00:21:16.628 "avg_latency_us": 26064.822783276108, 00:21:16.628 "min_latency_us": 4724.053333333333, 00:21:16.628 "max_latency_us": 70778.88 00:21:16.628 } 00:21:16.628 ], 00:21:16.628 "core_count": 1 00:21:16.628 } 00:21:16.628 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:21:16.628 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.628 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.628 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.628 19:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:21:16.628 "subsystems": [ 00:21:16.628 { 00:21:16.628 "subsystem": "keyring", 00:21:16.628 "config": [ 00:21:16.628 { 00:21:16.628 "method": "keyring_file_add_key", 00:21:16.628 "params": { 00:21:16.628 "name": "key0", 00:21:16.628 "path": "/tmp/tmp.TTCTvr9xRO" 00:21:16.628 } 00:21:16.628 } 00:21:16.628 ] 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "subsystem": "iobuf", 00:21:16.628 "config": [ 00:21:16.628 { 00:21:16.628 "method": "iobuf_set_options", 00:21:16.628 "params": { 00:21:16.628 "small_pool_count": 8192, 00:21:16.628 "large_pool_count": 1024, 00:21:16.628 "small_bufsize": 8192, 00:21:16.628 "large_bufsize": 135168, 00:21:16.628 "enable_numa": false 00:21:16.628 } 00:21:16.628 } 00:21:16.628 ] 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "subsystem": "sock", 00:21:16.628 "config": [ 00:21:16.628 { 00:21:16.628 "method": "sock_set_default_impl", 00:21:16.628 "params": { 00:21:16.628 "impl_name": "posix" 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "sock_impl_set_options", 00:21:16.628 "params": { 00:21:16.628 "impl_name": "ssl", 00:21:16.628 "recv_buf_size": 4096, 00:21:16.628 "send_buf_size": 4096, 00:21:16.628 "enable_recv_pipe": true, 00:21:16.628 "enable_quickack": false, 00:21:16.628 "enable_placement_id": 0, 00:21:16.628 "enable_zerocopy_send_server": true, 00:21:16.628 "enable_zerocopy_send_client": false, 00:21:16.628 "zerocopy_threshold": 0, 00:21:16.628 "tls_version": 0, 00:21:16.628 "enable_ktls": false 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "sock_impl_set_options", 00:21:16.628 "params": { 00:21:16.628 "impl_name": "posix", 00:21:16.628 "recv_buf_size": 2097152, 00:21:16.628 "send_buf_size": 2097152, 00:21:16.628 "enable_recv_pipe": true, 00:21:16.628 "enable_quickack": false, 00:21:16.628 "enable_placement_id": 0, 00:21:16.628 "enable_zerocopy_send_server": true, 00:21:16.628 "enable_zerocopy_send_client": false, 00:21:16.628 "zerocopy_threshold": 0, 00:21:16.628 "tls_version": 0, 00:21:16.628 "enable_ktls": false 00:21:16.628 } 00:21:16.628 } 00:21:16.628 ] 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "subsystem": "vmd", 00:21:16.628 "config": [] 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "subsystem": "accel", 00:21:16.628 "config": [ 00:21:16.628 { 00:21:16.628 "method": "accel_set_options", 00:21:16.628 "params": { 00:21:16.628 "small_cache_size": 128, 00:21:16.628 "large_cache_size": 16, 00:21:16.628 "task_count": 2048, 00:21:16.628 "sequence_count": 2048, 00:21:16.628 "buf_count": 2048 00:21:16.628 } 00:21:16.628 } 00:21:16.628 ] 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "subsystem": "bdev", 00:21:16.628 "config": [ 00:21:16.628 { 00:21:16.628 "method": "bdev_set_options", 00:21:16.628 "params": { 00:21:16.628 "bdev_io_pool_size": 65535, 00:21:16.628 "bdev_io_cache_size": 256, 00:21:16.628 "bdev_auto_examine": true, 00:21:16.628 "iobuf_small_cache_size": 128, 00:21:16.628 "iobuf_large_cache_size": 16 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_raid_set_options", 00:21:16.628 "params": { 00:21:16.628 "process_window_size_kb": 1024, 00:21:16.628 "process_max_bandwidth_mb_sec": 0 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_iscsi_set_options", 00:21:16.628 "params": { 00:21:16.628 "timeout_sec": 30 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_nvme_set_options", 00:21:16.628 "params": { 00:21:16.628 "action_on_timeout": "none", 00:21:16.628 
"timeout_us": 0, 00:21:16.628 "timeout_admin_us": 0, 00:21:16.628 "keep_alive_timeout_ms": 10000, 00:21:16.628 "arbitration_burst": 0, 00:21:16.628 "low_priority_weight": 0, 00:21:16.628 "medium_priority_weight": 0, 00:21:16.628 "high_priority_weight": 0, 00:21:16.628 "nvme_adminq_poll_period_us": 10000, 00:21:16.628 "nvme_ioq_poll_period_us": 0, 00:21:16.628 "io_queue_requests": 0, 00:21:16.628 "delay_cmd_submit": true, 00:21:16.628 "transport_retry_count": 4, 00:21:16.628 "bdev_retry_count": 3, 00:21:16.628 "transport_ack_timeout": 0, 00:21:16.628 "ctrlr_loss_timeout_sec": 0, 00:21:16.628 "reconnect_delay_sec": 0, 00:21:16.628 "fast_io_fail_timeout_sec": 0, 00:21:16.628 "disable_auto_failback": false, 00:21:16.628 "generate_uuids": false, 00:21:16.628 "transport_tos": 0, 00:21:16.628 "nvme_error_stat": false, 00:21:16.628 "rdma_srq_size": 0, 00:21:16.628 "io_path_stat": false, 00:21:16.628 "allow_accel_sequence": false, 00:21:16.628 "rdma_max_cq_size": 0, 00:21:16.628 "rdma_cm_event_timeout_ms": 0, 00:21:16.628 "dhchap_digests": [ 00:21:16.628 "sha256", 00:21:16.628 "sha384", 00:21:16.628 "sha512" 00:21:16.628 ], 00:21:16.628 "dhchap_dhgroups": [ 00:21:16.628 "null", 00:21:16.628 "ffdhe2048", 00:21:16.628 "ffdhe3072", 00:21:16.628 "ffdhe4096", 00:21:16.628 "ffdhe6144", 00:21:16.628 "ffdhe8192" 00:21:16.628 ] 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_nvme_set_hotplug", 00:21:16.628 "params": { 00:21:16.628 "period_us": 100000, 00:21:16.628 "enable": false 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_malloc_create", 00:21:16.628 "params": { 00:21:16.628 "name": "malloc0", 00:21:16.628 "num_blocks": 8192, 00:21:16.628 "block_size": 4096, 00:21:16.628 "physical_block_size": 4096, 00:21:16.628 "uuid": "41363e9c-f560-408f-861a-a3180c9a27a8", 00:21:16.628 "optimal_io_boundary": 0, 00:21:16.628 "md_size": 0, 00:21:16.628 "dif_type": 0, 00:21:16.628 "dif_is_head_of_md": false, 00:21:16.628 "dif_pi_format": 0 00:21:16.628 } 00:21:16.628 }, 00:21:16.628 { 00:21:16.628 "method": "bdev_wait_for_examine" 00:21:16.628 } 00:21:16.629 ] 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "subsystem": "nbd", 00:21:16.629 "config": [] 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "subsystem": "scheduler", 00:21:16.629 "config": [ 00:21:16.629 { 00:21:16.629 "method": "framework_set_scheduler", 00:21:16.629 "params": { 00:21:16.629 "name": "static" 00:21:16.629 } 00:21:16.629 } 00:21:16.629 ] 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "subsystem": "nvmf", 00:21:16.629 "config": [ 00:21:16.629 { 00:21:16.629 "method": "nvmf_set_config", 00:21:16.629 "params": { 00:21:16.629 "discovery_filter": "match_any", 00:21:16.629 "admin_cmd_passthru": { 00:21:16.629 "identify_ctrlr": false 00:21:16.629 }, 00:21:16.629 "dhchap_digests": [ 00:21:16.629 "sha256", 00:21:16.629 "sha384", 00:21:16.629 "sha512" 00:21:16.629 ], 00:21:16.629 "dhchap_dhgroups": [ 00:21:16.629 "null", 00:21:16.629 "ffdhe2048", 00:21:16.629 "ffdhe3072", 00:21:16.629 "ffdhe4096", 00:21:16.629 "ffdhe6144", 00:21:16.629 "ffdhe8192" 00:21:16.629 ] 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_set_max_subsystems", 00:21:16.629 "params": { 00:21:16.629 "max_subsystems": 1024 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_set_crdt", 00:21:16.629 "params": { 00:21:16.629 "crdt1": 0, 00:21:16.629 "crdt2": 0, 00:21:16.629 "crdt3": 0 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_create_transport", 00:21:16.629 "params": 
{ 00:21:16.629 "trtype": "TCP", 00:21:16.629 "max_queue_depth": 128, 00:21:16.629 "max_io_qpairs_per_ctrlr": 127, 00:21:16.629 "in_capsule_data_size": 4096, 00:21:16.629 "max_io_size": 131072, 00:21:16.629 "io_unit_size": 131072, 00:21:16.629 "max_aq_depth": 128, 00:21:16.629 "num_shared_buffers": 511, 00:21:16.629 "buf_cache_size": 4294967295, 00:21:16.629 "dif_insert_or_strip": false, 00:21:16.629 "zcopy": false, 00:21:16.629 "c2h_success": false, 00:21:16.629 "sock_priority": 0, 00:21:16.629 "abort_timeout_sec": 1, 00:21:16.629 "ack_timeout": 0, 00:21:16.629 "data_wr_pool_size": 0 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_create_subsystem", 00:21:16.629 "params": { 00:21:16.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.629 "allow_any_host": false, 00:21:16.629 "serial_number": "00000000000000000000", 00:21:16.629 "model_number": "SPDK bdev Controller", 00:21:16.629 "max_namespaces": 32, 00:21:16.629 "min_cntlid": 1, 00:21:16.629 "max_cntlid": 65519, 00:21:16.629 "ana_reporting": false 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_subsystem_add_host", 00:21:16.629 "params": { 00:21:16.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.629 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.629 "psk": "key0" 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_subsystem_add_ns", 00:21:16.629 "params": { 00:21:16.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.629 "namespace": { 00:21:16.629 "nsid": 1, 00:21:16.629 "bdev_name": "malloc0", 00:21:16.629 "nguid": "41363E9CF560408F861AA3180C9A27A8", 00:21:16.629 "uuid": "41363e9c-f560-408f-861a-a3180c9a27a8", 00:21:16.629 "no_auto_visible": false 00:21:16.629 } 00:21:16.629 } 00:21:16.629 }, 00:21:16.629 { 00:21:16.629 "method": "nvmf_subsystem_add_listener", 00:21:16.629 "params": { 00:21:16.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.629 "listen_address": { 00:21:16.629 "trtype": "TCP", 00:21:16.629 "adrfam": "IPv4", 00:21:16.629 "traddr": "10.0.0.2", 00:21:16.629 "trsvcid": "4420" 00:21:16.629 }, 00:21:16.629 "secure_channel": false, 00:21:16.629 "sock_impl": "ssl" 00:21:16.629 } 00:21:16.629 } 00:21:16.629 ] 00:21:16.629 } 00:21:16.629 ] 00:21:16.629 }' 00:21:16.629 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:16.890 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:21:16.890 "subsystems": [ 00:21:16.890 { 00:21:16.890 "subsystem": "keyring", 00:21:16.890 "config": [ 00:21:16.890 { 00:21:16.890 "method": "keyring_file_add_key", 00:21:16.890 "params": { 00:21:16.890 "name": "key0", 00:21:16.890 "path": "/tmp/tmp.TTCTvr9xRO" 00:21:16.890 } 00:21:16.890 } 00:21:16.890 ] 00:21:16.890 }, 00:21:16.890 { 00:21:16.890 "subsystem": "iobuf", 00:21:16.890 "config": [ 00:21:16.890 { 00:21:16.890 "method": "iobuf_set_options", 00:21:16.890 "params": { 00:21:16.890 "small_pool_count": 8192, 00:21:16.890 "large_pool_count": 1024, 00:21:16.890 "small_bufsize": 8192, 00:21:16.890 "large_bufsize": 135168, 00:21:16.890 "enable_numa": false 00:21:16.890 } 00:21:16.890 } 00:21:16.890 ] 00:21:16.890 }, 00:21:16.890 { 00:21:16.890 "subsystem": "sock", 00:21:16.890 "config": [ 00:21:16.890 { 00:21:16.890 "method": "sock_set_default_impl", 00:21:16.890 "params": { 00:21:16.890 "impl_name": "posix" 00:21:16.890 } 00:21:16.890 }, 00:21:16.890 { 00:21:16.890 "method": "sock_impl_set_options", 00:21:16.890 
"params": { 00:21:16.890 "impl_name": "ssl", 00:21:16.890 "recv_buf_size": 4096, 00:21:16.890 "send_buf_size": 4096, 00:21:16.890 "enable_recv_pipe": true, 00:21:16.890 "enable_quickack": false, 00:21:16.890 "enable_placement_id": 0, 00:21:16.890 "enable_zerocopy_send_server": true, 00:21:16.890 "enable_zerocopy_send_client": false, 00:21:16.890 "zerocopy_threshold": 0, 00:21:16.890 "tls_version": 0, 00:21:16.890 "enable_ktls": false 00:21:16.890 } 00:21:16.890 }, 00:21:16.890 { 00:21:16.891 "method": "sock_impl_set_options", 00:21:16.891 "params": { 00:21:16.891 "impl_name": "posix", 00:21:16.891 "recv_buf_size": 2097152, 00:21:16.891 "send_buf_size": 2097152, 00:21:16.891 "enable_recv_pipe": true, 00:21:16.891 "enable_quickack": false, 00:21:16.891 "enable_placement_id": 0, 00:21:16.891 "enable_zerocopy_send_server": true, 00:21:16.891 "enable_zerocopy_send_client": false, 00:21:16.891 "zerocopy_threshold": 0, 00:21:16.891 "tls_version": 0, 00:21:16.891 "enable_ktls": false 00:21:16.891 } 00:21:16.891 } 00:21:16.891 ] 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "subsystem": "vmd", 00:21:16.891 "config": [] 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "subsystem": "accel", 00:21:16.891 "config": [ 00:21:16.891 { 00:21:16.891 "method": "accel_set_options", 00:21:16.891 "params": { 00:21:16.891 "small_cache_size": 128, 00:21:16.891 "large_cache_size": 16, 00:21:16.891 "task_count": 2048, 00:21:16.891 "sequence_count": 2048, 00:21:16.891 "buf_count": 2048 00:21:16.891 } 00:21:16.891 } 00:21:16.891 ] 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "subsystem": "bdev", 00:21:16.891 "config": [ 00:21:16.891 { 00:21:16.891 "method": "bdev_set_options", 00:21:16.891 "params": { 00:21:16.891 "bdev_io_pool_size": 65535, 00:21:16.891 "bdev_io_cache_size": 256, 00:21:16.891 "bdev_auto_examine": true, 00:21:16.891 "iobuf_small_cache_size": 128, 00:21:16.891 "iobuf_large_cache_size": 16 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_raid_set_options", 00:21:16.891 "params": { 00:21:16.891 "process_window_size_kb": 1024, 00:21:16.891 "process_max_bandwidth_mb_sec": 0 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_iscsi_set_options", 00:21:16.891 "params": { 00:21:16.891 "timeout_sec": 30 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_nvme_set_options", 00:21:16.891 "params": { 00:21:16.891 "action_on_timeout": "none", 00:21:16.891 "timeout_us": 0, 00:21:16.891 "timeout_admin_us": 0, 00:21:16.891 "keep_alive_timeout_ms": 10000, 00:21:16.891 "arbitration_burst": 0, 00:21:16.891 "low_priority_weight": 0, 00:21:16.891 "medium_priority_weight": 0, 00:21:16.891 "high_priority_weight": 0, 00:21:16.891 "nvme_adminq_poll_period_us": 10000, 00:21:16.891 "nvme_ioq_poll_period_us": 0, 00:21:16.891 "io_queue_requests": 512, 00:21:16.891 "delay_cmd_submit": true, 00:21:16.891 "transport_retry_count": 4, 00:21:16.891 "bdev_retry_count": 3, 00:21:16.891 "transport_ack_timeout": 0, 00:21:16.891 "ctrlr_loss_timeout_sec": 0, 00:21:16.891 "reconnect_delay_sec": 0, 00:21:16.891 "fast_io_fail_timeout_sec": 0, 00:21:16.891 "disable_auto_failback": false, 00:21:16.891 "generate_uuids": false, 00:21:16.891 "transport_tos": 0, 00:21:16.891 "nvme_error_stat": false, 00:21:16.891 "rdma_srq_size": 0, 00:21:16.891 "io_path_stat": false, 00:21:16.891 "allow_accel_sequence": false, 00:21:16.891 "rdma_max_cq_size": 0, 00:21:16.891 "rdma_cm_event_timeout_ms": 0, 00:21:16.891 "dhchap_digests": [ 00:21:16.891 "sha256", 00:21:16.891 "sha384", 00:21:16.891 
"sha512" 00:21:16.891 ], 00:21:16.891 "dhchap_dhgroups": [ 00:21:16.891 "null", 00:21:16.891 "ffdhe2048", 00:21:16.891 "ffdhe3072", 00:21:16.891 "ffdhe4096", 00:21:16.891 "ffdhe6144", 00:21:16.891 "ffdhe8192" 00:21:16.891 ] 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_nvme_attach_controller", 00:21:16.891 "params": { 00:21:16.891 "name": "nvme0", 00:21:16.891 "trtype": "TCP", 00:21:16.891 "adrfam": "IPv4", 00:21:16.891 "traddr": "10.0.0.2", 00:21:16.891 "trsvcid": "4420", 00:21:16.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.891 "prchk_reftag": false, 00:21:16.891 "prchk_guard": false, 00:21:16.891 "ctrlr_loss_timeout_sec": 0, 00:21:16.891 "reconnect_delay_sec": 0, 00:21:16.891 "fast_io_fail_timeout_sec": 0, 00:21:16.891 "psk": "key0", 00:21:16.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.891 "hdgst": false, 00:21:16.891 "ddgst": false, 00:21:16.891 "multipath": "multipath" 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_nvme_set_hotplug", 00:21:16.891 "params": { 00:21:16.891 "period_us": 100000, 00:21:16.891 "enable": false 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_enable_histogram", 00:21:16.891 "params": { 00:21:16.891 "name": "nvme0n1", 00:21:16.891 "enable": true 00:21:16.891 } 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "method": "bdev_wait_for_examine" 00:21:16.891 } 00:21:16.891 ] 00:21:16.891 }, 00:21:16.891 { 00:21:16.891 "subsystem": "nbd", 00:21:16.891 "config": [] 00:21:16.891 } 00:21:16.891 ] 00:21:16.891 }' 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 373891 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 373891 ']' 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 373891 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 373891 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 373891' 00:21:16.891 killing process with pid 373891 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 373891 00:21:16.891 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.891 00:21:16.891 Latency(us) 00:21:16.891 [2024-11-05T18:10:46.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.891 [2024-11-05T18:10:46.214Z] =================================================================================================================== 00:21:16.891 [2024-11-05T18:10:46.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 373891 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 373543 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 373543 ']' 
00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 373543 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.891 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 373543 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 373543' 00:21:17.153 killing process with pid 373543 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 373543 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 373543 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.153 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:21:17.153 "subsystems": [ 00:21:17.153 { 00:21:17.153 "subsystem": "keyring", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "keyring_file_add_key", 00:21:17.153 "params": { 00:21:17.153 "name": "key0", 00:21:17.153 "path": "/tmp/tmp.TTCTvr9xRO" 00:21:17.153 } 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "iobuf", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "iobuf_set_options", 00:21:17.153 "params": { 00:21:17.153 "small_pool_count": 8192, 00:21:17.153 "large_pool_count": 1024, 00:21:17.153 "small_bufsize": 8192, 00:21:17.153 "large_bufsize": 135168, 00:21:17.153 "enable_numa": false 00:21:17.153 } 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "sock", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "sock_set_default_impl", 00:21:17.153 "params": { 00:21:17.153 "impl_name": "posix" 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "sock_impl_set_options", 00:21:17.153 "params": { 00:21:17.153 "impl_name": "ssl", 00:21:17.153 "recv_buf_size": 4096, 00:21:17.153 "send_buf_size": 4096, 00:21:17.153 "enable_recv_pipe": true, 00:21:17.153 "enable_quickack": false, 00:21:17.153 "enable_placement_id": 0, 00:21:17.153 "enable_zerocopy_send_server": true, 00:21:17.153 "enable_zerocopy_send_client": false, 00:21:17.153 "zerocopy_threshold": 0, 00:21:17.153 "tls_version": 0, 00:21:17.153 "enable_ktls": false 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "sock_impl_set_options", 00:21:17.153 "params": { 00:21:17.153 "impl_name": "posix", 00:21:17.153 "recv_buf_size": 2097152, 00:21:17.153 "send_buf_size": 2097152, 00:21:17.153 "enable_recv_pipe": true, 00:21:17.153 "enable_quickack": false, 00:21:17.153 "enable_placement_id": 0, 00:21:17.153 "enable_zerocopy_send_server": true, 00:21:17.153 "enable_zerocopy_send_client": false, 
00:21:17.153 "zerocopy_threshold": 0, 00:21:17.153 "tls_version": 0, 00:21:17.153 "enable_ktls": false 00:21:17.153 } 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "vmd", 00:21:17.153 "config": [] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "accel", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "accel_set_options", 00:21:17.153 "params": { 00:21:17.153 "small_cache_size": 128, 00:21:17.153 "large_cache_size": 16, 00:21:17.153 "task_count": 2048, 00:21:17.153 "sequence_count": 2048, 00:21:17.153 "buf_count": 2048 00:21:17.153 } 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "bdev", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "bdev_set_options", 00:21:17.153 "params": { 00:21:17.153 "bdev_io_pool_size": 65535, 00:21:17.153 "bdev_io_cache_size": 256, 00:21:17.153 "bdev_auto_examine": true, 00:21:17.153 "iobuf_small_cache_size": 128, 00:21:17.153 "iobuf_large_cache_size": 16 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_raid_set_options", 00:21:17.153 "params": { 00:21:17.153 "process_window_size_kb": 1024, 00:21:17.153 "process_max_bandwidth_mb_sec": 0 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_iscsi_set_options", 00:21:17.153 "params": { 00:21:17.153 "timeout_sec": 30 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_nvme_set_options", 00:21:17.153 "params": { 00:21:17.153 "action_on_timeout": "none", 00:21:17.153 "timeout_us": 0, 00:21:17.153 "timeout_admin_us": 0, 00:21:17.153 "keep_alive_timeout_ms": 10000, 00:21:17.153 "arbitration_burst": 0, 00:21:17.153 "low_priority_weight": 0, 00:21:17.153 "medium_priority_weight": 0, 00:21:17.153 "high_priority_weight": 0, 00:21:17.153 "nvme_adminq_poll_period_us": 10000, 00:21:17.153 "nvme_ioq_poll_period_us": 0, 00:21:17.153 "io_queue_requests": 0, 00:21:17.153 "delay_cmd_submit": true, 00:21:17.153 "transport_retry_count": 4, 00:21:17.153 "bdev_retry_count": 3, 00:21:17.153 "transport_ack_timeout": 0, 00:21:17.153 "ctrlr_loss_timeout_sec": 0, 00:21:17.153 "reconnect_delay_sec": 0, 00:21:17.153 "fast_io_fail_timeout_sec": 0, 00:21:17.153 "disable_auto_failback": false, 00:21:17.153 "generate_uuids": false, 00:21:17.153 "transport_tos": 0, 00:21:17.153 "nvme_error_stat": false, 00:21:17.153 "rdma_srq_size": 0, 00:21:17.153 "io_path_stat": false, 00:21:17.153 "allow_accel_sequence": false, 00:21:17.153 "rdma_max_cq_size": 0, 00:21:17.153 "rdma_cm_event_timeout_ms": 0, 00:21:17.153 "dhchap_digests": [ 00:21:17.153 "sha256", 00:21:17.153 "sha384", 00:21:17.153 "sha512" 00:21:17.153 ], 00:21:17.153 "dhchap_dhgroups": [ 00:21:17.153 "null", 00:21:17.153 "ffdhe2048", 00:21:17.153 "ffdhe3072", 00:21:17.153 "ffdhe4096", 00:21:17.153 "ffdhe6144", 00:21:17.153 "ffdhe8192" 00:21:17.153 ] 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_nvme_set_hotplug", 00:21:17.153 "params": { 00:21:17.153 "period_us": 100000, 00:21:17.153 "enable": false 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_malloc_create", 00:21:17.153 "params": { 00:21:17.153 "name": "malloc0", 00:21:17.153 "num_blocks": 8192, 00:21:17.153 "block_size": 4096, 00:21:17.153 "physical_block_size": 4096, 00:21:17.153 "uuid": "41363e9c-f560-408f-861a-a3180c9a27a8", 00:21:17.153 "optimal_io_boundary": 0, 00:21:17.153 "md_size": 0, 00:21:17.153 "dif_type": 0, 00:21:17.153 "dif_is_head_of_md": false, 00:21:17.153 "dif_pi_format": 0 
00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "bdev_wait_for_examine" 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "nbd", 00:21:17.153 "config": [] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "scheduler", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "framework_set_scheduler", 00:21:17.153 "params": { 00:21:17.153 "name": "static" 00:21:17.153 } 00:21:17.153 } 00:21:17.153 ] 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "subsystem": "nvmf", 00:21:17.153 "config": [ 00:21:17.153 { 00:21:17.153 "method": "nvmf_set_config", 00:21:17.153 "params": { 00:21:17.153 "discovery_filter": "match_any", 00:21:17.153 "admin_cmd_passthru": { 00:21:17.153 "identify_ctrlr": false 00:21:17.153 }, 00:21:17.153 "dhchap_digests": [ 00:21:17.153 "sha256", 00:21:17.153 "sha384", 00:21:17.153 "sha512" 00:21:17.153 ], 00:21:17.153 "dhchap_dhgroups": [ 00:21:17.153 "null", 00:21:17.153 "ffdhe2048", 00:21:17.153 "ffdhe3072", 00:21:17.153 "ffdhe4096", 00:21:17.153 "ffdhe6144", 00:21:17.153 "ffdhe8192" 00:21:17.153 ] 00:21:17.153 } 00:21:17.153 }, 00:21:17.153 { 00:21:17.153 "method": "nvmf_set_max_subsystems", 00:21:17.153 "params": { 00:21:17.153 "max_subsystems": 1024 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_set_crdt", 00:21:17.154 "params": { 00:21:17.154 "crdt1": 0, 00:21:17.154 "crdt2": 0, 00:21:17.154 "crdt3": 0 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_create_transport", 00:21:17.154 "params": { 00:21:17.154 "trtype": "TCP", 00:21:17.154 "max_queue_depth": 128, 00:21:17.154 "max_io_qpairs_per_ctrlr": 127, 00:21:17.154 "in_capsule_data_size": 4096, 00:21:17.154 "max_io_size": 131072, 00:21:17.154 "io_unit_size": 131072, 00:21:17.154 "max_aq_depth": 128, 00:21:17.154 "num_shared_buffers": 511, 00:21:17.154 "buf_cache_size": 4294967295, 00:21:17.154 "dif_insert_or_strip": false, 00:21:17.154 "zcopy": false, 00:21:17.154 "c2h_success": false, 00:21:17.154 "sock_priority": 0, 00:21:17.154 "abort_timeout_sec": 1, 00:21:17.154 "ack_timeout": 0, 00:21:17.154 "data_wr_pool_size": 0 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_create_subsystem", 00:21:17.154 "params": { 00:21:17.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.154 "allow_any_host": false, 00:21:17.154 "serial_number": "00000000000000000000", 00:21:17.154 "model_number": "SPDK bdev Controller", 00:21:17.154 "max_namespaces": 32, 00:21:17.154 "min_cntlid": 1, 00:21:17.154 "max_cntlid": 65519, 00:21:17.154 "ana_reporting": false 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_subsystem_add_host", 00:21:17.154 "params": { 00:21:17.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.154 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.154 "psk": "key0" 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_subsystem_add_ns", 00:21:17.154 "params": { 00:21:17.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.154 "namespace": { 00:21:17.154 "nsid": 1, 00:21:17.154 "bdev_name": "malloc0", 00:21:17.154 "nguid": "41363E9CF560408F861AA3180C9A27A8", 00:21:17.154 "uuid": "41363e9c-f560-408f-861a-a3180c9a27a8", 00:21:17.154 "no_auto_visible": false 00:21:17.154 } 00:21:17.154 } 00:21:17.154 }, 00:21:17.154 { 00:21:17.154 "method": "nvmf_subsystem_add_listener", 00:21:17.154 "params": { 00:21:17.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.154 "listen_address": { 00:21:17.154 "trtype": "TCP", 00:21:17.154 "adrfam": "IPv4", 
00:21:17.154 "traddr": "10.0.0.2", 00:21:17.154 "trsvcid": "4420" 00:21:17.154 }, 00:21:17.154 "secure_channel": false, 00:21:17.154 "sock_impl": "ssl" 00:21:17.154 } 00:21:17.154 } 00:21:17.154 ] 00:21:17.154 } 00:21:17.154 ] 00:21:17.154 }' 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=374461 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 374461 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 374461 ']' 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.154 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.154 [2024-11-05 19:10:46.430811] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:17.154 [2024-11-05 19:10:46.430868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.414 [2024-11-05 19:10:46.505168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.414 [2024-11-05 19:10:46.539441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.414 [2024-11-05 19:10:46.539473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.414 [2024-11-05 19:10:46.539481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.414 [2024-11-05 19:10:46.539487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.414 [2024-11-05 19:10:46.539493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.414 [2024-11-05 19:10:46.540118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.414 [2024-11-05 19:10:46.738708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.717 [2024-11-05 19:10:46.770719] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.718 [2024-11-05 19:10:46.770970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.110 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.110 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:18.110 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=374609 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 374609 /var/tmp/bdevperf.sock 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 374609 ']' 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
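The trap installed above is the test's safety net: on exit or interruption it archives the target's trace shared memory before tearing everything down. What that amounts to, reconstructed from the process_shm trace near the end of this run (the output path is this job's):

trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
# process_shm, roughly: locate this app id's shared-memory trace file and tar it up
id=0
shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')   # here: nvmf_trace.0
[ -n "$shm_files" ] && tar -C /dev/shm/ -cvzf \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.${id}_shm.tar.gz \
    $shm_files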
00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.111 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:18.111 "subsystems": [ 00:21:18.111 { 00:21:18.111 "subsystem": "keyring", 00:21:18.111 "config": [ 00:21:18.111 { 00:21:18.111 "method": "keyring_file_add_key", 00:21:18.111 "params": { 00:21:18.111 "name": "key0", 00:21:18.111 "path": "/tmp/tmp.TTCTvr9xRO" 00:21:18.111 } 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "iobuf", 00:21:18.111 "config": [ 00:21:18.111 { 00:21:18.111 "method": "iobuf_set_options", 00:21:18.111 "params": { 00:21:18.111 "small_pool_count": 8192, 00:21:18.111 "large_pool_count": 1024, 00:21:18.111 "small_bufsize": 8192, 00:21:18.111 "large_bufsize": 135168, 00:21:18.111 "enable_numa": false 00:21:18.111 } 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "sock", 00:21:18.111 "config": [ 00:21:18.111 { 00:21:18.111 "method": "sock_set_default_impl", 00:21:18.111 "params": { 00:21:18.111 "impl_name": "posix" 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "sock_impl_set_options", 00:21:18.111 "params": { 00:21:18.111 "impl_name": "ssl", 00:21:18.111 "recv_buf_size": 4096, 00:21:18.111 "send_buf_size": 4096, 00:21:18.111 "enable_recv_pipe": true, 00:21:18.111 "enable_quickack": false, 00:21:18.111 "enable_placement_id": 0, 00:21:18.111 "enable_zerocopy_send_server": true, 00:21:18.111 "enable_zerocopy_send_client": false, 00:21:18.111 "zerocopy_threshold": 0, 00:21:18.111 "tls_version": 0, 00:21:18.111 "enable_ktls": false 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "sock_impl_set_options", 00:21:18.111 "params": { 00:21:18.111 "impl_name": "posix", 00:21:18.111 "recv_buf_size": 2097152, 00:21:18.111 "send_buf_size": 2097152, 00:21:18.111 "enable_recv_pipe": true, 00:21:18.111 "enable_quickack": false, 00:21:18.111 "enable_placement_id": 0, 00:21:18.111 "enable_zerocopy_send_server": true, 00:21:18.111 "enable_zerocopy_send_client": false, 00:21:18.111 "zerocopy_threshold": 0, 00:21:18.111 "tls_version": 0, 00:21:18.111 "enable_ktls": false 00:21:18.111 } 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "vmd", 00:21:18.111 "config": [] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "accel", 00:21:18.111 "config": [ 00:21:18.111 { 00:21:18.111 "method": "accel_set_options", 00:21:18.111 "params": { 00:21:18.111 "small_cache_size": 128, 00:21:18.111 "large_cache_size": 16, 00:21:18.111 "task_count": 2048, 00:21:18.111 "sequence_count": 2048, 00:21:18.111 "buf_count": 2048 00:21:18.111 } 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "bdev", 00:21:18.111 "config": [ 00:21:18.111 { 00:21:18.111 "method": "bdev_set_options", 00:21:18.111 "params": { 00:21:18.111 "bdev_io_pool_size": 65535, 00:21:18.111 "bdev_io_cache_size": 256, 00:21:18.111 "bdev_auto_examine": true, 00:21:18.111 "iobuf_small_cache_size": 128, 00:21:18.111 "iobuf_large_cache_size": 16 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": 
"bdev_raid_set_options", 00:21:18.111 "params": { 00:21:18.111 "process_window_size_kb": 1024, 00:21:18.111 "process_max_bandwidth_mb_sec": 0 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_iscsi_set_options", 00:21:18.111 "params": { 00:21:18.111 "timeout_sec": 30 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_nvme_set_options", 00:21:18.111 "params": { 00:21:18.111 "action_on_timeout": "none", 00:21:18.111 "timeout_us": 0, 00:21:18.111 "timeout_admin_us": 0, 00:21:18.111 "keep_alive_timeout_ms": 10000, 00:21:18.111 "arbitration_burst": 0, 00:21:18.111 "low_priority_weight": 0, 00:21:18.111 "medium_priority_weight": 0, 00:21:18.111 "high_priority_weight": 0, 00:21:18.111 "nvme_adminq_poll_period_us": 10000, 00:21:18.111 "nvme_ioq_poll_period_us": 0, 00:21:18.111 "io_queue_requests": 512, 00:21:18.111 "delay_cmd_submit": true, 00:21:18.111 "transport_retry_count": 4, 00:21:18.111 "bdev_retry_count": 3, 00:21:18.111 "transport_ack_timeout": 0, 00:21:18.111 "ctrlr_loss_timeout_sec": 0, 00:21:18.111 "reconnect_delay_sec": 0, 00:21:18.111 "fast_io_fail_timeout_sec": 0, 00:21:18.111 "disable_auto_failback": false, 00:21:18.111 "generate_uuids": false, 00:21:18.111 "transport_tos": 0, 00:21:18.111 "nvme_error_stat": false, 00:21:18.111 "rdma_srq_size": 0, 00:21:18.111 "io_path_stat": false, 00:21:18.111 "allow_accel_sequence": false, 00:21:18.111 "rdma_max_cq_size": 0, 00:21:18.111 "rdma_cm_event_timeout_ms": 0, 00:21:18.111 "dhchap_digests": [ 00:21:18.111 "sha256", 00:21:18.111 "sha384", 00:21:18.111 "sha512" 00:21:18.111 ], 00:21:18.111 "dhchap_dhgroups": [ 00:21:18.111 "null", 00:21:18.111 "ffdhe2048", 00:21:18.111 "ffdhe3072", 00:21:18.111 "ffdhe4096", 00:21:18.111 "ffdhe6144", 00:21:18.111 "ffdhe8192" 00:21:18.111 ] 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_nvme_attach_controller", 00:21:18.111 "params": { 00:21:18.111 "name": "nvme0", 00:21:18.111 "trtype": "TCP", 00:21:18.111 "adrfam": "IPv4", 00:21:18.111 "traddr": "10.0.0.2", 00:21:18.111 "trsvcid": "4420", 00:21:18.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.111 "prchk_reftag": false, 00:21:18.111 "prchk_guard": false, 00:21:18.111 "ctrlr_loss_timeout_sec": 0, 00:21:18.111 "reconnect_delay_sec": 0, 00:21:18.111 "fast_io_fail_timeout_sec": 0, 00:21:18.111 "psk": "key0", 00:21:18.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.111 "hdgst": false, 00:21:18.111 "ddgst": false, 00:21:18.111 "multipath": "multipath" 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_nvme_set_hotplug", 00:21:18.111 "params": { 00:21:18.111 "period_us": 100000, 00:21:18.111 "enable": false 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_enable_histogram", 00:21:18.111 "params": { 00:21:18.111 "name": "nvme0n1", 00:21:18.111 "enable": true 00:21:18.111 } 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "method": "bdev_wait_for_examine" 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }, 00:21:18.111 { 00:21:18.111 "subsystem": "nbd", 00:21:18.111 "config": [] 00:21:18.111 } 00:21:18.111 ] 00:21:18.111 }' 00:21:18.111 [2024-11-05 19:10:47.323954] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:21:18.111 [2024-11-05 19:10:47.324007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374609 ] 00:21:18.112 [2024-11-05 19:10:47.400216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.372 [2024-11-05 19:10:47.429800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.372 [2024-11-05 19:10:47.564565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.943 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.943 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:18.943 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:18.943 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:21:19.203 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.203 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.203 Running I/O for 1 seconds... 00:21:20.144 4904.00 IOPS, 19.16 MiB/s 00:21:20.144 Latency(us) 00:21:20.144 [2024-11-05T18:10:49.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.144 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:20.144 Verification LBA range: start 0x0 length 0x2000 00:21:20.144 nvme0n1 : 1.01 4963.01 19.39 0.00 0.00 25631.26 5761.71 32986.45 00:21:20.144 [2024-11-05T18:10:49.467Z] =================================================================================================================== 00:21:20.144 [2024-11-05T18:10:49.467Z] Total : 4963.01 19.39 0.00 0.00 25631.26 5761.71 32986.45 00:21:20.144 { 00:21:20.144 "results": [ 00:21:20.144 { 00:21:20.144 "job": "nvme0n1", 00:21:20.144 "core_mask": "0x2", 00:21:20.144 "workload": "verify", 00:21:20.144 "status": "finished", 00:21:20.144 "verify_range": { 00:21:20.144 "start": 0, 00:21:20.144 "length": 8192 00:21:20.144 }, 00:21:20.144 "queue_depth": 128, 00:21:20.144 "io_size": 4096, 00:21:20.144 "runtime": 1.0139, 00:21:20.144 "iops": 4963.014103955025, 00:21:20.144 "mibps": 19.386773843574318, 00:21:20.144 "io_failed": 0, 00:21:20.144 "io_timeout": 0, 00:21:20.144 "avg_latency_us": 25631.25994700583, 00:21:20.144 "min_latency_us": 5761.706666666667, 00:21:20.144 "max_latency_us": 32986.45333333333 00:21:20.144 } 00:21:20.144 ], 00:21:20.144 "core_count": 1 00:21:20.144 } 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:20.144 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:20.144 nvmf_trace.0 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 374609 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 374609 ']' 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 374609 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 374609 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 374609' 00:21:20.404 killing process with pid 374609 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 374609 00:21:20.404 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.404 00:21:20.404 Latency(us) 00:21:20.404 [2024-11-05T18:10:49.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.404 [2024-11-05T18:10:49.727Z] =================================================================================================================== 00:21:20.404 [2024-11-05T18:10:49.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 374609 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:20.404 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:20.404 rmmod nvme_tcp 00:21:20.404 rmmod nvme_fabrics 00:21:20.404 rmmod nvme_keyring 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:20.664 19:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 374461 ']' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 374461 ']' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 374461' 00:21:20.664 killing process with pid 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 374461 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@264 -- # local dev 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:20.664 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # return 0 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:23.209 19:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@284 -- # iptr 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:23.209 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-save 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-restore 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.A3vqUAqQ7V /tmp/tmp.qLfm6iiLsE /tmp/tmp.TTCTvr9xRO 00:21:23.209 00:21:23.209 real 1m21.559s 00:21:23.209 user 2m5.749s 00:21:23.209 sys 0m26.757s 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.209 ************************************ 00:21:23.209 END TEST nvmf_tls 00:21:23.209 ************************************ 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.209 ************************************ 00:21:23.209 START TEST nvmf_fips 00:21:23.209 ************************************ 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.209 * Looking for test storage... 
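The fips.sh prologue that starts here leans twice on the cmp_versions helper from scripts/common.sh, traced below: once as "lt 1.15 2" to pick lcov options and once as "ge 3.1.1 3.0.0" to require an OpenSSL 3 build. A condensed sketch of that component-wise comparison (simplified: the real helper additionally validates each component against ^[0-9]+$ via its decimal function):

    # Split both versions on dots/dashes and compare component-wise; the
    # first differing component decides, and missing components count as 0.
    ge() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0   # equal versions satisfy >=
    }
    ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS checks"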
00:21:23.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.209 --rc genhtml_branch_coverage=1 00:21:23.209 --rc genhtml_function_coverage=1 00:21:23.209 --rc genhtml_legend=1 00:21:23.209 --rc geninfo_all_blocks=1 00:21:23.209 --rc geninfo_unexecuted_blocks=1 00:21:23.209 00:21:23.209 ' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.209 --rc genhtml_branch_coverage=1 00:21:23.209 --rc genhtml_function_coverage=1 00:21:23.209 --rc genhtml_legend=1 00:21:23.209 --rc geninfo_all_blocks=1 00:21:23.209 --rc geninfo_unexecuted_blocks=1 00:21:23.209 00:21:23.209 ' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.209 --rc genhtml_branch_coverage=1 00:21:23.209 --rc genhtml_function_coverage=1 00:21:23.209 --rc genhtml_legend=1 00:21:23.209 --rc geninfo_all_blocks=1 00:21:23.209 --rc geninfo_unexecuted_blocks=1 00:21:23.209 00:21:23.209 ' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:23.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.209 --rc genhtml_branch_coverage=1 00:21:23.209 --rc genhtml_function_coverage=1 00:21:23.209 --rc genhtml_legend=1 00:21:23.209 --rc geninfo_all_blocks=1 00:21:23.209 --rc geninfo_unexecuted_blocks=1 00:21:23.209 00:21:23.209 ' 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.209 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.210 19:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:23.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@86 -- # openssl version 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:23.210 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:23.211 Error setting digest 00:21:23.211 40F29B30317F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:23.211 40F29B30317F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:23.211 
19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:21:23.211 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # x722=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.803 
19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:29.803 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:29.803 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:29.803 19:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:29.803 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:29.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # create_target_ns 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:29.803 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:30.067 10.0.0.1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:30.067 10.0.0.2 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:30.067 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:30.330 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:30.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.709 ms 00:21:30.331 00:21:30.331 --- 10.0.0.1 ping statistics --- 00:21:30.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.331 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 
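The interface plumbing above hands out addresses from an integer pool: setup.sh's val_to_ip turned 167772161 (0x0A000001) into 10.0.0.1 for cvl_0_0 and 167772162 into 10.0.0.2 for cvl_0_1 inside nvmf_ns_spdk, and ping_ip verifies connectivity in both directions (the 10.0.0.1 leg above, the 10.0.0.2 leg just below). A self-contained sketch of that conversion; the trace only shows the final printf, so the shift/mask arithmetic here is an illustrative reconstruction:

    # Convert a 32-bit integer into dotted-quad form, one octet per byte.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >> 8)  & 255 )) \
            $((  val        & 255 ))
    }
    val_to_ip 167772161   # 10.0.0.1 -> initiator side (cvl_0_0)
    val_to_ip 167772162   # 10.0.0.2 -> target side (cvl_0_1 in the netns)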
00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:30.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:21:30.331 00:21:30.331 --- 10.0.0.2 ping statistics --- 00:21:30.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.331 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # 
get_initiator_ip_address 1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:21:30.331 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=379337 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 379337 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 379337 ']' 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
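
nvmfappstart launches nvmf_tgt inside the namespace with core mask 0x2, then waitforlisten blocks until the target answers on its UNIX-domain RPC socket. A rough sketch of that start-and-wait pattern, assuming the SPDK repo root as working directory (the real waitforlisten in autotest_common.sh does more bookkeeping; the polling loop below is an illustrative simplification):

#!/usr/bin/env bash
# Start the target in the test namespace; flags taken from the trace.
ip netns exec nvmf_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!    # kept so teardown can kill the target later

# Poll the RPC socket until the target responds, up to ~10 seconds.
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
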
00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:30.332 19:10:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.593 [2024-11-05 19:10:59.692699] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:30.593 [2024-11-05 19:10:59.692782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.593 [2024-11-05 19:10:59.792216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.593 [2024-11-05 19:10:59.842231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.593 [2024-11-05 19:10:59.842283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.593 [2024-11-05 19:10:59.842292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.593 [2024-11-05 19:10:59.842299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.593 [2024-11-05 19:10:59.842305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.593 [2024-11-05 19:10:59.843066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.163 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:31.163 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:31.163 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:31.163 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.163 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.8lO 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.8lO 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.8lO 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.8lO 00:21:31.424 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.424 [2024-11-05 19:11:00.704898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.424 [2024-11-05 19:11:00.720894] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered 
experimental 00:21:31.424 [2024-11-05 19:11:00.721232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.684 malloc0 00:21:31.684 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.684 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=379685 00:21:31.684 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 379685 /var/tmp/bdevperf.sock 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 379685 ']' 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.685 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.685 [2024-11-05 19:11:00.850200] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:31.685 [2024-11-05 19:11:00.850259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379685 ] 00:21:31.685 [2024-11-05 19:11:00.909396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.685 [2024-11-05 19:11:00.938459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.685 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:31.685 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:31.685 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.8lO 00:21:31.944 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.204 [2024-11-05 19:11:01.339643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.204 TLSTESTn1 00:21:32.204 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.204 Running I/O for 10 seconds... 
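
The FIPS test writes the TLS PSK, in NVMe interchange format, to a mode-0600 temp file, registers it with the bdevperf instance's keyring via keyring_file_add_key, attaches the controller over TLS with --psk key0, and then drives the queued workload through bdevperf.py. The sketch below restates that sequence from the trace as standalone commands; it assumes the SPDK repo root as working directory and a bdevperf instance already running with `-z -r /var/tmp/bdevperf.sock`, as launched above (key material, addresses, and NQNs copied from the trace):

#!/usr/bin/env bash
set -e
# PSK in NVMe TLS interchange format, written without a trailing newline
# and readable only by the owner.
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# Register the key with bdevperf and attach the controller over TLS.
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the workload configured at bdevperf launch
# (-q 128, 4 KiB verify, 10 s per the trace).
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
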
00:21:34.528 4303.00 IOPS, 16.81 MiB/s [2024-11-05T18:11:04.792Z] 4404.00 IOPS, 17.20 MiB/s [2024-11-05T18:11:05.731Z] 4769.67 IOPS, 18.63 MiB/s [2024-11-05T18:11:06.670Z] 4815.25 IOPS, 18.81 MiB/s [2024-11-05T18:11:07.611Z] 4828.40 IOPS, 18.86 MiB/s [2024-11-05T18:11:08.552Z] 4872.33 IOPS, 19.03 MiB/s [2024-11-05T18:11:09.934Z] 4895.57 IOPS, 19.12 MiB/s [2024-11-05T18:11:10.874Z] 4902.88 IOPS, 19.15 MiB/s [2024-11-05T18:11:11.815Z] 4922.78 IOPS, 19.23 MiB/s [2024-11-05T18:11:11.815Z] 4937.60 IOPS, 19.29 MiB/s 00:21:42.492 Latency(us) 00:21:42.492 [2024-11-05T18:11:11.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.492 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:42.492 Verification LBA range: start 0x0 length 0x2000 00:21:42.492 TLSTESTn1 : 10.02 4939.78 19.30 0.00 0.00 25870.32 5980.16 58108.59 00:21:42.492 [2024-11-05T18:11:11.815Z] =================================================================================================================== 00:21:42.492 [2024-11-05T18:11:11.815Z] Total : 4939.78 19.30 0.00 0.00 25870.32 5980.16 58108.59 00:21:42.492 { 00:21:42.492 "results": [ 00:21:42.492 { 00:21:42.492 "job": "TLSTESTn1", 00:21:42.492 "core_mask": "0x4", 00:21:42.492 "workload": "verify", 00:21:42.492 "status": "finished", 00:21:42.492 "verify_range": { 00:21:42.492 "start": 0, 00:21:42.492 "length": 8192 00:21:42.492 }, 00:21:42.492 "queue_depth": 128, 00:21:42.492 "io_size": 4096, 00:21:42.492 "runtime": 10.021287, 00:21:42.492 "iops": 4939.784680350937, 00:21:42.492 "mibps": 19.29603390762085, 00:21:42.492 "io_failed": 0, 00:21:42.492 "io_timeout": 0, 00:21:42.492 "avg_latency_us": 25870.322375074913, 00:21:42.492 "min_latency_us": 5980.16, 00:21:42.492 "max_latency_us": 58108.58666666667 00:21:42.492 } 00:21:42.492 ], 00:21:42.492 "core_count": 1 00:21:42.492 } 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.492 nvmf_trace.0 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 379685 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 379685 ']' 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # kill -0 379685 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 379685 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 379685' 00:21:42.492 killing process with pid 379685 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 379685 00:21:42.492 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.492 00:21:42.492 Latency(us) 00:21:42.492 [2024-11-05T18:11:11.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.492 [2024-11-05T18:11:11.815Z] =================================================================================================================== 00:21:42.492 [2024-11-05T18:11:11.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.492 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 379685 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:42.752 rmmod nvme_tcp 00:21:42.752 rmmod nvme_fabrics 00:21:42.752 rmmod nvme_keyring 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 379337 ']' 00:21:42.752 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 379337 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 379337 ']' 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 379337 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 379337 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 379337' 00:21:42.753 killing process with pid 379337 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 379337 00:21:42.753 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 379337 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@264 -- # local dev 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:43.013 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # return 0 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:44.925 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:21:44.926 19:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@284 -- # iptr 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-save 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-restore 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.8lO 00:21:44.926 00:21:44.926 real 0m22.095s 00:21:44.926 user 0m22.587s 00:21:44.926 sys 0m9.940s 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.926 ************************************ 00:21:44.926 END TEST nvmf_fips 00:21:44.926 ************************************ 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:44.926 19:11:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.188 ************************************ 00:21:45.188 START TEST nvmf_control_msg_list 00:21:45.188 ************************************ 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:45.188 * Looking for test storage... 
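
Teardown in the preceding trace mirrors setup: the shared-memory trace file is archived, both processes are killed, the nvme-tcp/fabrics/keyring modules are unloaded, the test addresses are flushed, and `iptr` removes exactly the comment-tagged firewall rules by round-tripping the ruleset through a grep filter. A sketch of that cleanup (device names and the SPDK_NVMF tag from the trace; ordering as logged, after the namespace has been removed):

#!/usr/bin/env bash
# Flush the addresses assigned during setup.
ip addr flush dev cvl_0_0
ip addr flush dev cvl_0_1
# Dump the live ruleset, drop every rule carrying the SPDK_NVMF comment
# added by ipts at setup time, and load the filtered result back.
iptables-save | grep -v SPDK_NVMF | iptables-restore
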
00:21:45.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:45.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.188 --rc genhtml_branch_coverage=1 00:21:45.188 --rc genhtml_function_coverage=1 00:21:45.188 --rc genhtml_legend=1 00:21:45.188 --rc geninfo_all_blocks=1 00:21:45.188 --rc geninfo_unexecuted_blocks=1 00:21:45.188 00:21:45.188 ' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:45.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.188 --rc genhtml_branch_coverage=1 00:21:45.188 --rc genhtml_function_coverage=1 00:21:45.188 --rc genhtml_legend=1 00:21:45.188 --rc geninfo_all_blocks=1 00:21:45.188 --rc geninfo_unexecuted_blocks=1 00:21:45.188 00:21:45.188 ' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:45.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.188 --rc genhtml_branch_coverage=1 00:21:45.188 --rc genhtml_function_coverage=1 00:21:45.188 --rc genhtml_legend=1 00:21:45.188 --rc geninfo_all_blocks=1 00:21:45.188 --rc geninfo_unexecuted_blocks=1 00:21:45.188 00:21:45.188 ' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:45.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.188 --rc genhtml_branch_coverage=1 00:21:45.188 --rc genhtml_function_coverage=1 00:21:45.188 --rc genhtml_legend=1 00:21:45.188 --rc geninfo_all_blocks=1 00:21:45.188 --rc geninfo_unexecuted_blocks=1 00:21:45.188 00:21:45.188 ' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:45.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:21:45.189 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:21:51.779 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@138 -- # mlx=() 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:51.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:51.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:51.780 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:51.780 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:51.780 19:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # create_target_ns 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:21:51.780 19:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:51.780 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:51.781 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:51.781 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:52.042 10.0.0.1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:52.042 10.0.0.2 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:52.042 19:11:21 
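# setup.sh carries the IP pool as a single integer and renders it one octet at a
# time: 167772161 is 0x0A000001, i.e. 10.0.0.1, and each initiator/target pair
# consumes two addresses. One possible implementation of the conversion (a
# sketch, not necessarily setup.sh's exact code):
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24 & 255)) $((val >> 16 & 255)) \
        $((val >> 8 & 255)) $((val & 255))
}
val_to_ip 167772161    # -> 10.0.0.1
val_to_ip 167772162    # -> 10.0.0.2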
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:52.042 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:52.043 19:11:21 
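# The ipts wrapper above tags every rule it inserts with an SPDK_NVMF comment, so
# teardown (iptr, near the end of this test) can strip exactly those rules with a
# save/filter/restore cycle instead of flushing the whole table. The pattern in
# isolation (rule text copied from the trace):
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drops only tagged rules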
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:52.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.591 ms 00:21:52.043 00:21:52.043 --- 10.0.0.1 ping statistics --- 00:21:52.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.043 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:52.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:21:52.043 00:21:52.043 --- 10.0.0.2 ping statistics --- 00:21:52.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.043 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.043 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:52.305 19:11:21 
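# The get_ip_address helpers traced here avoid parsing `ip addr` output: set_ip
# wrote each address into the interface's sysfs ifalias earlier, so lookups are a
# plain cat, prefixed with the namespace exec array when the device lives in the
# target namespace. Sketch:
get_ip() { cat "/sys/class/net/$1/ifalias"; }
get_ip cvl_0_0                                                  # -> 10.0.0.1 (host side)
ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias   # -> 10.0.0.2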
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:52.305 19:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:52.305 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=385887 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 385887 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 
-- # '[' -z 385887 ']' 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:52.306 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:52.306 [2024-11-05 19:11:21.538129] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:52.306 [2024-11-05 19:11:21.538191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.306 [2024-11-05 19:11:21.618735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.566 [2024-11-05 19:11:21.659054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.566 [2024-11-05 19:11:21.659090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.566 [2024-11-05 19:11:21.659099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.566 [2024-11-05 19:11:21.659105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.566 [2024-11-05 19:11:21.659111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
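# waitforlisten blocks until the freshly launched nvmf_tgt answers on its RPC
# socket (max_retries=100 per the trace). A rough equivalent using the stock
# rpc.py client; the polling interval is illustrative, not autotest_common.sh's
# actual code:
for _ in {1..100}; do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done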
00:21:52.566 [2024-11-05 19:11:21.659736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 [2024-11-05 19:11:22.364836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 Malloc0 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.138 19:11:22 
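# The rpc_cmd calls traced above and below provision the target end to end. The
# roughly equivalent direct rpc.py invocations (flags copied from the trace;
# --control-msg-num 1 deliberately leaves a single control message for the
# initiators to contend over):
scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420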
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:53.138 [2024-11-05 19:11:22.415675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=386078 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=386079 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=386080 00:21:53.138 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 386078 00:21:53.139 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.400 [2024-11-05 19:11:22.486423] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:53.400 [2024-11-05 19:11:22.486714] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:53.400 [2024-11-05 19:11:22.487040] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:54.344 Initializing NVMe Controllers 00:21:54.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:54.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:54.344 Initialization complete. Launching workers. 
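# Three spdk_nvme_perf instances (cores 0x2, 0x4, 0x8, queue depth 1) were just
# launched against the same subsystem; with only one control message configured,
# the per-core results below are consistent with two of them being starved
# (~25 IOPS at ~41 ms average) while the third proceeds normally. Shape of one
# invocation (arguments copied from the trace, binary path shortened):
spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'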
00:21:54.344 ========================================================
00:21:54.344 Latency(us)
00:21:54.344 Device Information : IOPS MiB/s Average min max
00:21:54.344 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41009.16 40722.07 41927.63
00:21:54.344 ========================================================
00:21:54.344 Total : 25.00 0.10 41009.16 40722.07 41927.63
00:21:54.344
00:21:54.344 Initializing NVMe Controllers
00:21:54.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:54.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:54.344 Initialization complete. Launching workers.
00:21:54.344 ========================================================
00:21:54.344 Latency(us)
00:21:54.344 Device Information : IOPS MiB/s Average min max
00:21:54.344 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2213.00 8.64 451.92 140.74 1011.45
00:21:54.344 ========================================================
00:21:54.344 Total : 2213.00 8.64 451.92 140.74 1011.45
00:21:54.344
00:21:54.344 Initializing NVMe Controllers
00:21:54.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:54.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:21:54.344 Initialization complete. Launching workers.
00:21:54.344 ========================================================
00:21:54.344 Latency(us)
00:21:54.344 Device Information : IOPS MiB/s Average min max
00:21:54.344 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41074.87 40784.41 41947.43
00:21:54.344 ========================================================
00:21:54.344 Total : 25.00 0.10 41074.87 40784.41 41947.43
00:21:54.344
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 386079
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 386080
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup
00:21:54.344 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20}
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:21:54.605 rmmod nvme_tcp
00:21:54.605 rmmod nvme_fabrics
00:21:54.605 rmmod nvme_keyring
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0
00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336
-- # '[' -n 385887 ']' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 385887 ']' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 385887' 00:21:54.605 killing process with pid 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 385887 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@264 -- # local dev 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:54.605 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # return 0 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:57.151 
19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:21:57.151 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=()
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@284 -- # iptr
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-save
00:21:57.152 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-restore
00:21:57.152
00:21:57.152 real 0m11.739s
00:21:57.152 user 0m7.665s
00:21:57.152 sys 0m6.055s
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:21:57.152 ************************************
00:21:57.152 END TEST nvmf_control_msg_list
00:21:57.152 ************************************
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:57.152 ************************************
00:21:57.152 START TEST nvmf_wait_for_buf
00:21:57.152 ************************************
00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:21:57.152 * Looking for test storage...
00:21:57.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.152 --rc genhtml_branch_coverage=1 00:21:57.152 --rc genhtml_function_coverage=1 00:21:57.152 --rc genhtml_legend=1 00:21:57.152 --rc geninfo_all_blocks=1 00:21:57.152 --rc geninfo_unexecuted_blocks=1 00:21:57.152 00:21:57.152 ' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.152 --rc genhtml_branch_coverage=1 00:21:57.152 --rc genhtml_function_coverage=1 00:21:57.152 --rc genhtml_legend=1 00:21:57.152 --rc geninfo_all_blocks=1 00:21:57.152 --rc geninfo_unexecuted_blocks=1 00:21:57.152 00:21:57.152 ' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.152 --rc genhtml_branch_coverage=1 00:21:57.152 --rc genhtml_function_coverage=1 00:21:57.152 --rc genhtml_legend=1 00:21:57.152 --rc geninfo_all_blocks=1 00:21:57.152 --rc geninfo_unexecuted_blocks=1 00:21:57.152 00:21:57.152 ' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.152 --rc genhtml_branch_coverage=1 00:21:57.152 --rc genhtml_function_coverage=1 00:21:57.152 --rc genhtml_legend=1 00:21:57.152 --rc geninfo_all_blocks=1 00:21:57.152 --rc geninfo_unexecuted_blocks=1 00:21:57.152 00:21:57.152 ' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.152 19:11:26 
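# The cmp_versions trace above implements the lcov version gate: split both
# version strings on [.-], then compare field by field. A compact sketch of the
# same idea (illustrative, not scripts/common.sh verbatim):
lt() {
    local IFS='.-'
    local a=($1) b=($2) i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo 'lcov 1.15 is older than 2'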
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.152 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # : 0 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:57.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:57.153 
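# The "integer expression expected" message above is a harmless but real bug:
# common.sh line 31 feeds an empty string to a numeric test. A defensive form
# (variable name illustrative) defaults the value before comparing:
[ "${SPDK_TEST_FLAG:-0}" -eq 1 ] && echo 'feature enabled'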
19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:21:57.153 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:22:05.293 19:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.293 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.294 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.294 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.294 19:11:33 
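Annotation: the e810/x722/mlx arrays built above are whitelists of PCI vendor:device IDs, and the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines mean both ports of an Intel E810 NIC matched. The script's pci_bus_cache lookup table is not shown in this trace; a minimal sketch of the same match done directly against sysfs:

    # Hedged sketch: list PCI functions whose vendor:device pair is an E810 ID.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")   # e.g. 0x8086
        device=$(cat "$dev/device")   # e.g. 0x159b
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b)
                echo "Found ${dev##*/} ($vendor - $device)" ;;
        esac
    done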
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.294 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.294 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 
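Annotation: each matched PCI function is then mapped to its kernel netdev by globbing its net/ subdirectory, exactly as traced at common.sh@227, and stripping the path prefix leaves interface names such as cvl_0_0. A minimal standalone version of that lookup:

    # Hedged sketch of the sysfs PCI -> netdev lookup used above.
    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"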
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # create_target_ns 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # 
ips=("$ip" $((++ip))) 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:05.294 10.0.0.1 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:05.294 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:05.295 19:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:05.295 10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 
)) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:05.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.686 ms 00:22:05.295 00:22:05.295 --- 10.0.0.1 ping statistics --- 00:22:05.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.295 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:05.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:22:05.295 00:22:05.295 --- 10.0.0.2 ping statistics --- 00:22:05.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.295 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:05.295 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:05.296 19:11:33 
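Annotation: the block above is the whole physical-NIC topology for this test: cvl_0_0 stays in the root namespace as the initiator (10.0.0.1), cvl_0_1 is moved into the nvmf_ns_spdk namespace as the target (10.0.0.2), and each side pings the other once to prove the path before any NVMe traffic runs. Condensed, the commands the trace executed are (hedged; error handling and the ifalias bookkeeping reads are omitted where not essential):

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_0             # initiator side
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
    ping -c 1 10.0.0.2                              # initiator -> target ns

The ifalias writes matter later: get_ip_address resolves a device's IP by reading /sys/class/net/<dev>/ifalias, which is why the subsequent NVMF_FIRST_*_IP lookups cat those files.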
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=390476 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 390476 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 390476 ']' 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:05.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.296 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 [2024-11-05 19:11:33.578025] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:22:05.296 [2024-11-05 19:11:33.578123] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.296 [2024-11-05 19:11:33.660677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.296 [2024-11-05 19:11:33.701670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.296 [2024-11-05 19:11:33.701708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.296 [2024-11-05 19:11:33.701718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.296 [2024-11-05 19:11:33.701726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.296 [2024-11-05 19:11:33.701733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.296 [2024-11-05 19:11:33.702325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.296 19:11:34 
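Annotation: waitforlisten blocks until the freshly started nvmf_tgt (pid 390476, launched inside the netns with --wait-for-rpc) answers on its RPC socket; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. A minimal sketch of such a polling loop, assuming a simple socket-existence check (the real helper additionally probes the RPC server):

    # Hedged sketch of waiting for an SPDK RPC socket to come up.
    pid=390476                       # nvmfpid from the log
    rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do        # max_retries=100, as traced
        kill -0 "$pid" 2>/dev/null || { echo "target died"; exit 1; }
        [ -S "$rpc_addr" ] && break  # socket exists once the app listens
        sleep 0.1
    done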
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 Malloc0 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.296 [2024-11-05 19:11:34.500966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.296 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:05.297 [2024-11-05 19:11:34.537175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
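Annotation: because the target was started with --wait-for-rpc, all configuration lands over RPC before framework_start_init releases it. The small iobuf pool (154 buffers) and the transport's -n 24 -b 24 are deliberately undersized so the perf run below has to hit the buffer-retry path. The same sequence expressed with rpc.py (hedged; rpc_cmd in the log is a thin wrapper around it):

    # Hedged rpc.py equivalent of the rpc_cmd sequence traced above.
    rpc=scripts/rpc.py
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init
    $rpc bdev_malloc_create -b Malloc0 32 512        # 32 MiB bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420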
00:22:05.297 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:05.557 [2024-11-05 19:11:34.638824] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:06.941 Initializing NVMe Controllers 00:22:06.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:06.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:06.941 Initialization complete. Launching workers. 00:22:06.941 ======================================================== 00:22:06.941 Latency(us) 00:22:06.941 Device Information : IOPS MiB/s Average min max 00:22:06.941 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 118.45 14.81 34950.83 30934.98 110705.78 00:22:06.941 ======================================================== 00:22:06.941 Total : 118.45 14.81 34950.83 30934.98 110705.78 00:22:06.941 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:06.941 rmmod nvme_tcp 00:22:06.941 rmmod nvme_fabrics 00:22:06.941 rmmod nvme_keyring 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@336 -- # '[' -n 390476 ']' 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 390476 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 390476 ']' 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 390476 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:06.941 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 390476 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 390476' 00:22:07.201 killing process with pid 390476 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 390476 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 390476 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@264 -- # local dev 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:07.201 19:11:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # return 0 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush 
dev cvl_0_0 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@284 -- # iptr 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-save 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-restore 00:22:09.743 00:22:09.743 real 0m12.379s 00:22:09.743 user 0m4.944s 00:22:09.743 sys 0m5.973s 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.743 ************************************ 00:22:09.743 END TEST nvmf_wait_for_buf 00:22:09.743 ************************************ 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:22:09.743 19:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 
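Annotation: the pass/fail criterion for nvmf_wait_for_buf is the retry counter: with only 154 small buffers, the spdk_nvme_perf load must have forced nvmf_TCP to retry allocations (1878 times in this run), and the test fails if the counter is still 0. A hedged wrapper around the exact jq filter from the trace:

    # Verbatim filter from the trace, wrapped in a minimal check.
    retry_count=$(scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo "no buffer pressure observed"; exit 1; }

Teardown then reverses setup: per-device ip addr flush, namespace removal, and iptables-save | grep -v SPDK_NVMF | iptables-restore, which drops exactly the rules tagged with the SPDK_NVMF comment at setup time.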
00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:16.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:16.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:16.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:16.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh 
--transport=tcp 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.333 ************************************ 00:22:16.333 START TEST nvmf_perf_adq 00:22:16.333 ************************************ 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:16.333 * Looking for test storage... 00:22:16.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.333 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:16.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.334 --rc genhtml_branch_coverage=1 00:22:16.334 --rc genhtml_function_coverage=1 00:22:16.334 --rc genhtml_legend=1 00:22:16.334 --rc geninfo_all_blocks=1 00:22:16.334 --rc geninfo_unexecuted_blocks=1 00:22:16.334 00:22:16.334 ' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.334 --rc genhtml_branch_coverage=1 00:22:16.334 --rc genhtml_function_coverage=1 00:22:16.334 --rc genhtml_legend=1 00:22:16.334 --rc geninfo_all_blocks=1 00:22:16.334 --rc geninfo_unexecuted_blocks=1 00:22:16.334 00:22:16.334 ' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.334 --rc genhtml_branch_coverage=1 00:22:16.334 --rc genhtml_function_coverage=1 00:22:16.334 --rc genhtml_legend=1 00:22:16.334 --rc geninfo_all_blocks=1 00:22:16.334 --rc geninfo_unexecuted_blocks=1 00:22:16.334 00:22:16.334 ' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:16.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.334 --rc genhtml_branch_coverage=1 00:22:16.334 --rc genhtml_function_coverage=1 00:22:16.334 --rc genhtml_legend=1 00:22:16.334 --rc geninfo_all_blocks=1 00:22:16.334 --rc geninfo_unexecuted_blocks=1 00:22:16.334 00:22:16.334 ' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
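The cmp_versions trace above is the harness checking whether the installed lcov (1.15) is older than 2.x before exporting the branch/function coverage flags: both version strings are split on '.'/'-' and the fields are compared numerically, one position at a time. A minimal bash sketch of that digit-wise comparison, assuming purely numeric fields (the helper name is illustrative, not the exact scripts/common.sh code):

    # lt A B: succeed (return 0) when version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields compare as 0, so "2" behaves like "2.0".
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    lt 1.15 2 && echo 'lcov < 2: enable --rc lcov_branch_coverage=1 etc.'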
00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:16.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:22:16.334 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:22:24.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:24.478 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.479 19:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:24.479 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:24.479 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == 
up ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:24.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:24.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:24.479 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:25.055 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:26.966 19:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:22:32.255 
19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.255 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.256 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.256 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
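Every gather_supported_nvmf_pci_devs pass in this log repeats the same walk: keep only the PCI addresses whose device IDs are on the supported E810/X722/mlx list, then resolve each surviving address to its kernel net device through sysfs. A condensed, stand-alone sketch of that lookup (the real filtering in test/nvmf/common.sh also checks the bound driver and link state):

    #!/usr/bin/env bash
    # E810 ports identified earlier in this log.
    pci_devs=("0000:4b:00.0" "0000:4b:00.1")
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        # The kernel exposes bound interfaces as directories under .../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue   # glob unmatched: no netdev bound
        # Strip the sysfs prefix, keeping only the interface names (e.g. cvl_0_0).
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done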
00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.256 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.256 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
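create_target_ns and the set_up/set_ip calls that follow lean on two small bash tricks worth spelling out: command indirection through a nameref (helpers receive the *name* of the NVMF_TARGET_NS_CMD array and eval through it, so one helper works both on the host and inside nvmf_ns_spdk), and carrying the IP pool as a 32-bit integer (167772161 == 0x0A000001) rendered on demand as a dotted quad. A stripped-down sketch of both, assuming root and bash >= 4.3 for namerefs (the helper bodies are illustrative, not the exact nvmf/setup.sh code):

    ip netns add nvmf_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

    # set_up DEV [ARRAY_NAME]: bring DEV up, optionally inside the namespace
    # whose exec prefix is held in the array named by $2.
    set_up() {
        local dev=$1 in_ns=$2
        if [[ -n $in_ns ]]; then
            local -n ns=$in_ns              # nameref onto NVMF_TARGET_NS_CMD
            eval "${ns[*]} ip link set $dev up"
        else
            ip link set "$dev" up
        fi
    }
    set_up lo NVMF_TARGET_NS_CMD            # loopback up inside nvmf_ns_spdk

    # One way to do the trace's val_to_ip conversion: 167772161 -> 10.0.0.1.
    val=167772161
    printf '%u.%u.%u.%u\n' $(( val >> 24 )) $(( (val >> 16) & 0xff )) \
                           $(( (val >> 8) & 0xff )) $(( val & 0xff ))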
00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:32.256 19:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:32.256 10.0.0.1 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:32.256 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:32.257 10.0.0.2 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set 
cvl_0_0 up 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:32.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.704 ms 00:22:32.257 00:22:32.257 --- 10.0.0.1 ping statistics --- 00:22:32.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.257 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:32.257 19:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:32.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:22:32.257 00:22:32.257 --- 10.0.0.2 ping statistics --- 00:22:32.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.257 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:32.257 19:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:32.257 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:32.258 19:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:32.258 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=400795 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 400795 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@833 -- # '[' -z 400795 ']' 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:32.519 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:32.519 [2024-11-05 19:12:01.656807] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:22:32.519 [2024-11-05 19:12:01.656878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.519 [2024-11-05 19:12:01.739456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.519 [2024-11-05 19:12:01.781179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.519 [2024-11-05 19:12:01.781215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.519 [2024-11-05 19:12:01.781224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.519 [2024-11-05 19:12:01.781231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.519 [2024-11-05 19:12:01.781237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
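The waitforlisten helper traced above simply polls the target's JSON-RPC socket until the nvmf_tgt process (started with --wait-for-rpc) is ready to accept configuration. A minimal bash sketch of that pattern, assuming SPDK's stock scripts/rpc.py and the default /var/tmp/spdk.sock socket; the retry budget of 100 mirrors the max_retries value seen in the trace, while the poll interval and the wait_for_rpc name itself are illustrative, not the exact implementation in autotest_common.sh:

# Sketch: poll the SPDK RPC socket until the freshly started target answers.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # Give up early if the target process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds as soon as the socket accepts RPCs.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

Once this returns 0, the script can safely issue sock_impl_set_options, framework_start_init and nvmf_create_transport over the same socket, which is exactly what adq_configure_nvmf_target does in the trace that follows.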
00:22:32.519 [2024-11-05 19:12:01.783072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.519 [2024-11-05 19:12:01.783188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.519 [2024-11-05 19:12:01.783342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.519 [2024-11-05 19:12:01.783344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.464 [2024-11-05 19:12:02.640958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]]
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:33.464 Malloc1
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:33.464 [2024-11-05 19:12:02.712080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=401165
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:22:33.464 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:22:36.011 "tick_rate": 2400000000,
00:22:36.011 "poll_groups": [
00:22:36.011 {
00:22:36.011 "name": "nvmf_tgt_poll_group_000",
00:22:36.011 "admin_qpairs": 1,
00:22:36.011 "io_qpairs": 1,
00:22:36.011 "current_admin_qpairs": 1,
00:22:36.011 "current_io_qpairs": 1,
00:22:36.011 "pending_bdev_io": 0,
00:22:36.011 "completed_nvme_io": 18875,
00:22:36.011 "transports": [
00:22:36.011 {
00:22:36.011 "trtype": "TCP"
00:22:36.011 }
00:22:36.011 ]
00:22:36.011 },
00:22:36.011 {
00:22:36.011 "name": "nvmf_tgt_poll_group_001",
00:22:36.011 "admin_qpairs": 0,
00:22:36.011 "io_qpairs": 1,
00:22:36.011 "current_admin_qpairs": 0,
00:22:36.011 "current_io_qpairs": 1,
00:22:36.011 "pending_bdev_io": 0,
00:22:36.011 "completed_nvme_io": 27128,
00:22:36.011 "transports": [
00:22:36.011 {
00:22:36.011 "trtype": "TCP"
00:22:36.011 }
00:22:36.011 ]
00:22:36.011 },
00:22:36.011 {
00:22:36.011 "name": "nvmf_tgt_poll_group_002",
00:22:36.011 "admin_qpairs": 0,
00:22:36.011 "io_qpairs": 1,
00:22:36.011 "current_admin_qpairs": 0,
00:22:36.011 "current_io_qpairs": 1,
00:22:36.011 "pending_bdev_io": 0,
00:22:36.011 "completed_nvme_io": 19324,
00:22:36.011 "transports": [
00:22:36.011 {
00:22:36.011 "trtype": "TCP"
00:22:36.011 }
00:22:36.011 ]
00:22:36.011 },
00:22:36.011 {
00:22:36.011 "name": "nvmf_tgt_poll_group_003",
00:22:36.011 "admin_qpairs": 0,
00:22:36.011 "io_qpairs": 1,
00:22:36.011 "current_admin_qpairs": 0,
00:22:36.011 "current_io_qpairs": 1,
00:22:36.011 "pending_bdev_io": 0,
00:22:36.011 "completed_nvme_io": 19218,
00:22:36.011 "transports": [
00:22:36.011 {
00:22:36.011 "trtype": "TCP"
00:22:36.011 }
00:22:36.011 ]
00:22:36.011 }
00:22:36.011 ]
00:22:36.011 }'
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:22:36.011 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 401165
00:22:44.152 Initializing NVMe Controllers
00:22:44.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:44.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:44.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:44.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:44.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:44.152 Initialization complete. Launching workers.
00:22:44.152 ========================================================
00:22:44.152 Latency(us)
00:22:44.152 Device Information : IOPS MiB/s Average min max
00:22:44.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10827.10 42.29 5911.36 1395.18 9066.24
00:22:44.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14392.60 56.22 4447.31 1081.68 9483.94
00:22:44.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13353.00 52.16 4792.02 1623.14 10891.58
00:22:44.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13034.30 50.92 4920.55 1216.92 46264.04
00:22:44.152 ========================================================
00:22:44.152 Total : 51606.99 201.59 4963.18 1081.68 46264.04
00:22:44.152
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20}
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:22:44.152 rmmod nvme_tcp
00:22:44.152 rmmod nvme_fabrics
00:22:44.152 rmmod nvme_keyring
00:22:44.152 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 400795 ']'
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 400795 ']'
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 400795'
00:22:44.152 killing process with pid 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 400795
00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:22:44.152 19:12:13
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:44.152 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:22:46.067 
19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:46.067 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:47.980 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:49.893 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:22:55.188 19:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:55.188 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.188 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:55.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:55.189 19:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:55.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:55.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:55.189 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:55.189 10.0.0.1 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:55.189 10.0.0.2 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.189 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( 
pair < pairs )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:55.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.682 ms 00:22:55.190 00:22:55.190 --- 10.0.0.1 ping statistics --- 00:22:55.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.190 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:55.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:55.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:22:55.190 00:22:55.190 --- 10.0.0.2 ping statistics --- 00:22:55.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.190 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:55.190 19:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:55.190 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:55.191 net.core.busy_poll = 1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:55.191 net.core.busy_read = 1 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:55.191 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=406187 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 406187 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 406187 ']' 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:55.452 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.452 [2024-11-05 19:12:24.645165] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:22:55.452 [2024-11-05 19:12:24.645236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.452 [2024-11-05 19:12:24.728418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.452 [2024-11-05 19:12:24.770492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.452 [2024-11-05 19:12:24.770531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.452 [2024-11-05 19:12:24.770539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.452 [2024-11-05 19:12:24.770546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.452 [2024-11-05 19:12:24.770551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
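Condensed from the trace above, the adq_configure_driver step that precedes this second target start reduces to the following command sequence (the E810 port cvl_0_1, namespace nvmf_ns_spdk and listener 10.0.0.2:4420 are the values from this particular run; on a bare host the ip netns exec prefix would be dropped):

# Enable hardware TC offload on the port and disable packet-inspect optimization.
ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on
ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off
# Busy-poll sockets instead of sleeping in the kernel's event loop.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Carve the queues into two traffic classes (2 queues each) in channel mode,
# then steer NVMe/TCP traffic for the listener into TC 1 entirely in hardware.
ip netns exec nvmf_ns_spdk tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec nvmf_ns_spdk tc qdisc add dev cvl_0_1 ingress
ip netns exec nvmf_ns_spdk tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Finally, align XPS and RX queues so each core owns its hardware queue pair.
ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1

With --enable-placement-id 1 and --sock-priority 1 set on the nvmf side (the rpc_cmd calls traced below), incoming connections on port 4420 then land on the ADQ-dedicated queue set.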
00:22:55.452 [2024-11-05 19:12:24.773635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.452 [2024-11-05 19:12:24.773763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.452 [2024-11-05 19:12:24.773869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.452 [2024-11-05 19:12:24.773870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 [2024-11-05 19:12:25.619720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 Malloc1 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 [2024-11-05 19:12:25.690113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=406460 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:56.395 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:58.949 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:58.949 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.949 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.949 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.949 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:58.949 "tick_rate": 2400000000, 00:22:58.949 "poll_groups": [ 00:22:58.949 { 00:22:58.949 "name": "nvmf_tgt_poll_group_000", 00:22:58.949 "admin_qpairs": 1, 00:22:58.949 "io_qpairs": 2, 00:22:58.949 "current_admin_qpairs": 1, 00:22:58.949 "current_io_qpairs": 2, 00:22:58.949 "pending_bdev_io": 0, 00:22:58.949 
"completed_nvme_io": 27654, 00:22:58.949 "transports": [ 00:22:58.949 { 00:22:58.950 "trtype": "TCP" 00:22:58.950 } 00:22:58.950 ] 00:22:58.950 }, 00:22:58.950 { 00:22:58.950 "name": "nvmf_tgt_poll_group_001", 00:22:58.950 "admin_qpairs": 0, 00:22:58.950 "io_qpairs": 2, 00:22:58.950 "current_admin_qpairs": 0, 00:22:58.950 "current_io_qpairs": 2, 00:22:58.950 "pending_bdev_io": 0, 00:22:58.950 "completed_nvme_io": 37870, 00:22:58.950 "transports": [ 00:22:58.950 { 00:22:58.950 "trtype": "TCP" 00:22:58.950 } 00:22:58.950 ] 00:22:58.950 }, 00:22:58.950 { 00:22:58.950 "name": "nvmf_tgt_poll_group_002", 00:22:58.950 "admin_qpairs": 0, 00:22:58.950 "io_qpairs": 0, 00:22:58.950 "current_admin_qpairs": 0, 00:22:58.950 "current_io_qpairs": 0, 00:22:58.950 "pending_bdev_io": 0, 00:22:58.950 "completed_nvme_io": 0, 00:22:58.950 "transports": [ 00:22:58.950 { 00:22:58.950 "trtype": "TCP" 00:22:58.950 } 00:22:58.950 ] 00:22:58.950 }, 00:22:58.950 { 00:22:58.950 "name": "nvmf_tgt_poll_group_003", 00:22:58.950 "admin_qpairs": 0, 00:22:58.950 "io_qpairs": 0, 00:22:58.950 "current_admin_qpairs": 0, 00:22:58.950 "current_io_qpairs": 0, 00:22:58.950 "pending_bdev_io": 0, 00:22:58.950 "completed_nvme_io": 0, 00:22:58.950 "transports": [ 00:22:58.950 { 00:22:58.950 "trtype": "TCP" 00:22:58.950 } 00:22:58.950 ] 00:22:58.950 } 00:22:58.950 ] 00:22:58.950 }' 00:22:58.950 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:58.950 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:58.950 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:58.950 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:58.950 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 406460 00:23:07.106 Initializing NVMe Controllers 00:23:07.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:07.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:07.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:07.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:07.106 Initialization complete. Launching workers. 
00:23:07.106 ======================================================== 00:23:07.106 Latency(us) 00:23:07.106 Device Information : IOPS MiB/s Average min max 00:23:07.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9484.13 37.05 6749.72 1235.69 51056.98 00:23:07.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11095.11 43.34 5786.01 1223.54 49466.31 00:23:07.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9068.03 35.42 7058.76 992.87 50148.49 00:23:07.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9783.32 38.22 6543.26 958.41 53350.12 00:23:07.106 ======================================================== 00:23:07.106 Total : 39430.60 154.03 6498.39 958.41 53350.12 00:23:07.106 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:07.106 rmmod nvme_tcp 00:23:07.106 rmmod nvme_fabrics 00:23:07.106 rmmod nvme_keyring 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 406187 ']' 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 406187 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 406187 ']' 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 406187 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:07.106 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 406187 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 406187' 00:23:07.106 killing process with pid 406187 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 406187 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 406187 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:07.106 19:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:07.106 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:23:09.039 
19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:09.039 00:23:09.039 real 0m52.891s 00:23:09.039 user 2m50.196s 00:23:09.039 sys 0m11.429s 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.039 ************************************ 00:23:09.039 END TEST nvmf_perf_adq 00:23:09.039 ************************************ 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:09.039 ************************************ 00:23:09.039 START TEST nvmf_shutdown 00:23:09.039 ************************************ 00:23:09.039 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:09.340 * Looking for test storage... 00:23:09.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.340 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:09.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.340 --rc genhtml_branch_coverage=1 00:23:09.340 --rc genhtml_function_coverage=1 00:23:09.340 --rc genhtml_legend=1 00:23:09.340 --rc geninfo_all_blocks=1 00:23:09.340 --rc geninfo_unexecuted_blocks=1 00:23:09.340 00:23:09.340 ' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:09.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.340 --rc genhtml_branch_coverage=1 00:23:09.340 --rc genhtml_function_coverage=1 00:23:09.340 --rc genhtml_legend=1 00:23:09.340 --rc geninfo_all_blocks=1 00:23:09.340 --rc geninfo_unexecuted_blocks=1 00:23:09.340 00:23:09.340 ' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:09.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.340 --rc genhtml_branch_coverage=1 00:23:09.340 --rc genhtml_function_coverage=1 00:23:09.340 --rc genhtml_legend=1 00:23:09.340 --rc geninfo_all_blocks=1 00:23:09.340 --rc geninfo_unexecuted_blocks=1 00:23:09.340 00:23:09.340 ' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:09.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.340 --rc genhtml_branch_coverage=1 00:23:09.340 --rc genhtml_function_coverage=1 00:23:09.340 --rc genhtml_legend=1 00:23:09.340 --rc geninfo_all_blocks=1 00:23:09.340 --rc geninfo_unexecuted_blocks=1 00:23:09.340 00:23:09.340 ' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
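The lt/cmp_versions helper traced above (scripts/common.sh, used here to pick lcov options for lcov 1.15 vs 2.x) is a plain component-wise dotted-version compare. A minimal standalone re-statement of the same idea, not the verbatim helper:

    # Split versions on '.', '-' and ':', compare numeric fields left to
    # right; missing fields compare as 0. Returns 0 iff $1 < $2.
    lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal counts as "not less than"
    }

    lt 1.15 2 && echo "old lcov"    # matches the 'lt 1.15 2' branch above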
00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:09.340 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:09.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:09.341 ************************************ 00:23:09.341 START TEST nvmf_shutdown_tc1 00:23:09.341 ************************************ 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:09.341 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:17.541 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:17.541 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:17.541 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:17.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:17.541 
19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:17.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:17.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # create_target_ns 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:17.541 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.542 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:17.542 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:17.542 10.0.0.1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:17.542 10.0.0.2 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:17.542 19:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:17.542 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:17.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.678 ms 00:23:17.543 00:23:17.543 --- 10.0.0.1 ping statistics --- 00:23:17.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.543 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:17.543 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:17.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:17.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:23:17.543 00:23:17.543 --- 10.0.0.2 ping statistics --- 00:23:17.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.543 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.543 19:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:17.543 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 
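
At this point nvmf/setup.sh is done: cvl_0_0 stays in the root namespace as the initiator (10.0.0.1), cvl_0_1 sits inside the nvmf_ns_spdk namespace as the target (10.0.0.2), both directions ping, and the legacy variables (NVMF_TARGET_INTERFACE=cvl_0_1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_FIRST_INITIATOR_IP=10.0.0.1) are exported before nvme-tcp is loaded. A condensed sketch of the same setup, assuming the namespace creation and the move of cvl_0_1 into it happened in the earlier, untraced steps:

  # initiator side (root namespace); setup.sh records each IP in ifalias
  ip addr add 10.0.0.1/24 dev cvl_0_0
  echo 10.0.0.1 > /sys/class/net/cvl_0_0/ifalias
  ip link set cvl_0_0 up
  # target side (inside the nvmf_ns_spdk namespace)
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
  # cross-namespace reachability check, as traced above
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
  ping -c 1 10.0.0.2
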
00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=412899 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 412899 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 412899 ']' 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:17.544 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.544 [2024-11-05 19:12:46.202575] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:17.544 [2024-11-05 19:12:46.202649] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.544 [2024-11-05 19:12:46.301274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.544 [2024-11-05 19:12:46.352953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.544 [2024-11-05 19:12:46.353005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.544 [2024-11-05 19:12:46.353015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.544 [2024-11-05 19:12:46.353022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.544 [2024-11-05 19:12:46.353028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
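
The four reactor notices that follow are a direct consequence of the -m 0x1E core mask passed to nvmf_tgt above: bits 1 through 4 are set, so the target claims cores 1-4 and core 0 is left free for the bdev_svc/bdevperf processes started later with -m 0x1. A quick way to decode such a mask (illustrative helper, not part of the test scripts):

  mask=0x1E   # 0b11110 -> reactors on cores 1, 2, 3, 4
  for c in {0..7}; do (( (mask >> c) & 1 )) && echo "reactor on core $c"; done
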
00:23:17.544 [2024-11-05 19:12:46.355354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.544 [2024-11-05 19:12:46.355522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.544 [2024-11-05 19:12:46.355686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:17.544 [2024-11-05 19:12:46.355687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.806 [2024-11-05 19:12:47.059646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.806 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.807 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:17.807 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:17.807 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:17.807 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.807 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.067 Malloc1 00:23:18.067 [2024-11-05 19:12:47.176954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.067 Malloc2 00:23:18.067 Malloc3 00:23:18.067 Malloc4 00:23:18.067 Malloc5 00:23:18.067 Malloc6 00:23:18.067 Malloc7 00:23:18.328 Malloc8 00:23:18.328 Malloc9 00:23:18.328 Malloc10 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=413149 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 413149 /var/tmp/bdevperf.sock 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 413149 ']' 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.328 19:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.328 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": 
"Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 [2024-11-05 19:12:47.634547] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:23:18.329 [2024-11-05 19:12:47.634602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.329 { 00:23:18.329 "params": { 00:23:18.329 "name": "Nvme$subsystem", 00:23:18.329 "trtype": "$TEST_TRANSPORT", 00:23:18.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.329 "adrfam": "ipv4", 00:23:18.329 "trsvcid": "$NVMF_PORT", 00:23:18.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.329 "hdgst": ${hdgst:-false}, 00:23:18.329 "ddgst": ${ddgst:-false} 00:23:18.329 }, 00:23:18.329 "method": "bdev_nvme_attach_controller" 00:23:18.329 } 00:23:18.329 EOF 00:23:18.329 )") 00:23:18.329 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.590 { 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme$subsystem", 00:23:18.590 "trtype": "$TEST_TRANSPORT", 00:23:18.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "$NVMF_PORT", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.590 "hdgst": ${hdgst:-false}, 00:23:18.590 "ddgst": ${ddgst:-false} 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 } 00:23:18.590 EOF 00:23:18.590 )") 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:18.590 { 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme$subsystem", 00:23:18.590 "trtype": "$TEST_TRANSPORT", 00:23:18.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.590 "adrfam": "ipv4", 
00:23:18.590 "trsvcid": "$NVMF_PORT", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.590 "hdgst": ${hdgst:-false}, 00:23:18.590 "ddgst": ${ddgst:-false} 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 } 00:23:18.590 EOF 00:23:18.590 )") 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:23:18.590 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme1", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme2", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme3", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme4", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme5", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme6", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme7", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 
"adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.590 },{ 00:23:18.590 "params": { 00:23:18.590 "name": "Nvme8", 00:23:18.590 "trtype": "tcp", 00:23:18.590 "traddr": "10.0.0.2", 00:23:18.590 "adrfam": "ipv4", 00:23:18.590 "trsvcid": "4420", 00:23:18.590 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.590 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.590 "hdgst": false, 00:23:18.590 "ddgst": false 00:23:18.590 }, 00:23:18.590 "method": "bdev_nvme_attach_controller" 00:23:18.591 },{ 00:23:18.591 "params": { 00:23:18.591 "name": "Nvme9", 00:23:18.591 "trtype": "tcp", 00:23:18.591 "traddr": "10.0.0.2", 00:23:18.591 "adrfam": "ipv4", 00:23:18.591 "trsvcid": "4420", 00:23:18.591 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.591 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.591 "hdgst": false, 00:23:18.591 "ddgst": false 00:23:18.591 }, 00:23:18.591 "method": "bdev_nvme_attach_controller" 00:23:18.591 },{ 00:23:18.591 "params": { 00:23:18.591 "name": "Nvme10", 00:23:18.591 "trtype": "tcp", 00:23:18.591 "traddr": "10.0.0.2", 00:23:18.591 "adrfam": "ipv4", 00:23:18.591 "trsvcid": "4420", 00:23:18.591 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.591 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.591 "hdgst": false, 00:23:18.591 "ddgst": false 00:23:18.591 }, 00:23:18.591 "method": "bdev_nvme_attach_controller" 00:23:18.591 }' 00:23:18.591 [2024-11-05 19:12:47.707242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.591 [2024-11-05 19:12:47.743735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 413149 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:19.973 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:20.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 413149 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 412899 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.914 { 00:23:20.914 "params": { 00:23:20.914 "name": "Nvme$subsystem", 00:23:20.914 "trtype": "$TEST_TRANSPORT", 00:23:20.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.914 "adrfam": "ipv4", 00:23:20.914 "trsvcid": "$NVMF_PORT", 00:23:20.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.914 "hdgst": ${hdgst:-false}, 00:23:20.914 "ddgst": ${ddgst:-false} 00:23:20.914 }, 00:23:20.914 "method": "bdev_nvme_attach_controller" 00:23:20.914 } 00:23:20.914 EOF 00:23:20.914 )") 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.914 { 00:23:20.914 "params": { 00:23:20.914 "name": "Nvme$subsystem", 00:23:20.914 "trtype": "$TEST_TRANSPORT", 00:23:20.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.914 "adrfam": "ipv4", 00:23:20.914 "trsvcid": "$NVMF_PORT", 00:23:20.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.914 "hdgst": ${hdgst:-false}, 00:23:20.914 "ddgst": ${ddgst:-false} 00:23:20.914 }, 00:23:20.914 "method": "bdev_nvme_attach_controller" 00:23:20.914 } 00:23:20.914 EOF 00:23:20.914 )") 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.914 19:12:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.914 { 00:23:20.914 "params": { 00:23:20.914 "name": "Nvme$subsystem", 00:23:20.914 "trtype": "$TEST_TRANSPORT", 00:23:20.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.914 "adrfam": "ipv4", 00:23:20.914 "trsvcid": "$NVMF_PORT", 00:23:20.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.914 "hdgst": ${hdgst:-false}, 00:23:20.914 "ddgst": ${ddgst:-false} 00:23:20.914 }, 00:23:20.914 "method": "bdev_nvme_attach_controller" 00:23:20.914 } 00:23:20.914 EOF 00:23:20.914 )") 00:23:20.914 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.914 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 [2024-11-05 19:12:50.034449] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
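
What shutdown_tc1 is exercising here is a two-phase pattern: a throwaway bdev_svc (pid 413149) is attached to all ten subsystems, killed with SIGKILL to simulate an abrupt initiator shutdown (the "line 74: 413149 Killed" message above is the expected outcome, not a failure), the target (pid 412899) is probed with kill -0 to confirm it survived, and only then is the real bdevperf run with -q 64 -o 65536 -w verify -t 1 against a freshly generated copy of the same JSON. The sequence, condensed from the trace (pids differ per run):

"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
kill -9 "$perfpid"        # abrupt initiator shutdown
kill -0 "$nvmfpid"        # the target must still be alive
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 1
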
00:23:20.915 [2024-11-05 19:12:50.034504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid413695 ] 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:20.915 { 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme$subsystem", 00:23:20.915 "trtype": "$TEST_TRANSPORT", 00:23:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "$NVMF_PORT", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.915 "hdgst": ${hdgst:-false}, 00:23:20.915 "ddgst": ${ddgst:-false} 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 } 00:23:20.915 EOF 00:23:20.915 )") 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:23:20.915 19:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme1", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme2", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme3", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme4", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme5", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme6", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:20.915 "hdgst": false, 00:23:20.915 "ddgst": false 00:23:20.915 }, 00:23:20.915 "method": "bdev_nvme_attach_controller" 00:23:20.915 },{ 00:23:20.915 "params": { 00:23:20.915 "name": "Nvme7", 00:23:20.915 "trtype": "tcp", 00:23:20.915 "traddr": "10.0.0.2", 00:23:20.915 "adrfam": "ipv4", 00:23:20.915 "trsvcid": "4420", 00:23:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:20.915 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:20.915 "hdgst": false, 00:23:20.916 "ddgst": false 00:23:20.916 }, 00:23:20.916 "method": "bdev_nvme_attach_controller" 00:23:20.916 },{ 00:23:20.916 "params": { 00:23:20.916 "name": "Nvme8", 00:23:20.916 "trtype": "tcp", 00:23:20.916 "traddr": "10.0.0.2", 00:23:20.916 "adrfam": "ipv4", 00:23:20.916 "trsvcid": "4420", 00:23:20.916 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:20.916 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:20.916 "hdgst": false,
00:23:20.916 "ddgst": false
00:23:20.916 },
00:23:20.916 "method": "bdev_nvme_attach_controller"
00:23:20.916 },{
00:23:20.916 "params": {
00:23:20.916 "name": "Nvme9",
00:23:20.916 "trtype": "tcp",
00:23:20.916 "traddr": "10.0.0.2",
00:23:20.916 "adrfam": "ipv4",
00:23:20.916 "trsvcid": "4420",
00:23:20.916 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:20.916 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:20.916 "hdgst": false,
00:23:20.916 "ddgst": false
00:23:20.916 },
00:23:20.916 "method": "bdev_nvme_attach_controller"
00:23:20.916 },{
00:23:20.916 "params": {
00:23:20.916 "name": "Nvme10",
00:23:20.916 "trtype": "tcp",
00:23:20.916 "traddr": "10.0.0.2",
00:23:20.916 "adrfam": "ipv4",
00:23:20.916 "trsvcid": "4420",
00:23:20.916 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:20.916 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:20.916 "hdgst": false,
00:23:20.916 "ddgst": false
00:23:20.916 },
00:23:20.916 "method": "bdev_nvme_attach_controller"
00:23:20.916 }'
00:23:20.916 [2024-11-05 19:12:50.110260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.916 [2024-11-05 19:12:50.148490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:22.298 Running I/O for 1 seconds...
00:23:23.683 1864.00 IOPS, 116.50 MiB/s
00:23:23.683 Latency(us)
00:23:23.683 [2024-11-05T18:12:53.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme1n1 : 1.14 223.81 13.99 0.00 0.00 283101.44 15182.51 256901.12
00:23:23.683 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme2n1 : 1.10 232.13 14.51 0.00 0.00 268171.31 18240.85 253405.87
00:23:23.683 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme3n1 : 1.07 248.76 15.55 0.00 0.00 244129.22 4068.69 263891.63
00:23:23.683 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme4n1 : 1.14 228.35 14.27 0.00 0.00 263338.77 20097.71 234181.97
00:23:23.683 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme5n1 : 1.18 217.25 13.58 0.00 0.00 272701.01 17913.17 272629.76
00:23:23.683 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme6n1 : 1.13 229.18 14.32 0.00 0.00 251193.40 3522.56 241172.48
00:23:23.683 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme7n1 : 1.18 270.84 16.93 0.00 0.00 211204.78 20971.52 253405.87
00:23:23.683 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme8n1 : 1.18 273.60 17.10 0.00 0.00 205377.00 1672.53 251658.24
00:23:23.683 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme9n1 : 1.17 218.19 13.64 0.00 0.00 252642.35 17367.04 270882.13
00:23:23.683 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:23.683 Verification LBA range: start 0x0 length 0x400
00:23:23.683 Nvme10n1 : 1.19 272.80 17.05 0.00 0.00 198744.38 1727.15 269134.51
00:23:23.683 [2024-11-05T18:12:53.006Z] ===================================================================================================================
00:23:23.683 [2024-11-05T18:12:53.006Z] Total : 2414.92 150.93 0.00 0.00 242192.15 1672.53 272629.76
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20}
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:23:23.683 rmmod nvme_tcp
00:23:23.683 rmmod nvme_fabrics
00:23:23.683 rmmod nvme_keyring
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 412899 ']'
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 412899
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 412899 ']'
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 412899
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:23.683 19:12:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 412899
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 412899'
00:23:23.944 killing process with pid 412899
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 412899
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 412899
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@264 -- # local dev
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@267 -- # remove_target_ns
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:23:23.944 19:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@268 -- # delete_main_bridge
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # return 0
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@284 -- # iptr 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-save 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-restore 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:26.494 00:23:26.494 real 0m16.719s 00:23:26.494 user 0m33.711s 00:23:26.494 sys 0m6.765s 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 ************************************ 00:23:26.494 END TEST nvmf_shutdown_tc1 00:23:26.494 ************************************ 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:26.494 ************************************ 00:23:26.494 START TEST nvmf_shutdown_tc2 00:23:26.494 ************************************ 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:26.494 
19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:26.494 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # local -ga e810 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:26.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:26.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:26.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:26.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:26.495 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # create_target_ns 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:26.495 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local 
initiator=initiator0 target=target0 _ns= 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:26.496 10.0.0.1 00:23:26.496 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:26.496 10.0.0.2 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@78 -- # 
[[ phy == veth ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:26.496 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:26.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.618 ms 00:23:26.496 00:23:26.496 --- 10.0.0.1 ping statistics --- 00:23:26.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.496 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:26.496 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:26.497 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:26.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:23:26.497 00:23:26.497 --- 10.0.0.2 ping statistics --- 00:23:26.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.497 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:26.497 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:26.497 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:26.758 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:26.758 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.758 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=414844 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 414844 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 414844 ']' 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.759 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.759 [2024-11-05 19:12:55.962229] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:26.759 [2024-11-05 19:12:55.962278] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.759 [2024-11-05 19:12:56.055909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.020 [2024-11-05 19:12:56.087276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.020 [2024-11-05 19:12:56.087308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.020 [2024-11-05 19:12:56.087314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.020 [2024-11-05 19:12:56.087318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.020 [2024-11-05 19:12:56.087323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.020 [2024-11-05 19:12:56.088571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.020 [2024-11-05 19:12:56.088726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.020 [2024-11-05 19:12:56.088886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.020 [2024-11-05 19:12:56.088887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:27.593 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:27.593 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:27.593 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.594 [2024-11-05 19:12:56.809440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.594 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.594 Malloc1 00:23:27.855 [2024-11-05 19:12:56.922691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.855 Malloc2 00:23:27.855 Malloc3 00:23:27.855 Malloc4 00:23:27.855 Malloc5 00:23:27.855 Malloc6 00:23:27.855 Malloc7 00:23:27.855 Malloc8 00:23:28.117 Malloc9 00:23:28.117 Malloc10 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=415226 00:23:28.117 19:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 415226 /var/tmp/bdevperf.sock 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 415226 ']' 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.117 { 00:23:28.117 "params": { 00:23:28.117 "name": "Nvme$subsystem", 00:23:28.117 "trtype": "$TEST_TRANSPORT", 00:23:28.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.117 "adrfam": "ipv4", 00:23:28.117 "trsvcid": "$NVMF_PORT", 00:23:28.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.117 "hdgst": ${hdgst:-false}, 00:23:28.117 "ddgst": ${ddgst:-false} 00:23:28.117 }, 00:23:28.117 "method": "bdev_nvme_attach_controller" 00:23:28.117 } 00:23:28.117 EOF 00:23:28.117 )") 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.117 { 00:23:28.117 "params": { 00:23:28.117 "name": "Nvme$subsystem", 00:23:28.117 "trtype": "$TEST_TRANSPORT", 00:23:28.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.117 "adrfam": "ipv4", 00:23:28.117 "trsvcid": "$NVMF_PORT", 00:23:28.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.117 "hdgst": ${hdgst:-false}, 00:23:28.117 "ddgst": ${ddgst:-false} 00:23:28.117 }, 00:23:28.117 "method": "bdev_nvme_attach_controller" 00:23:28.117 } 00:23:28.117 EOF 
00:23:28.117 )") 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.117 { 00:23:28.117 "params": { 00:23:28.117 "name": "Nvme$subsystem", 00:23:28.117 "trtype": "$TEST_TRANSPORT", 00:23:28.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.117 "adrfam": "ipv4", 00:23:28.117 "trsvcid": "$NVMF_PORT", 00:23:28.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.117 "hdgst": ${hdgst:-false}, 00:23:28.117 "ddgst": ${ddgst:-false} 00:23:28.117 }, 00:23:28.117 "method": "bdev_nvme_attach_controller" 00:23:28.117 } 00:23:28.117 EOF 00:23:28.117 )") 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.117 { 00:23:28.117 "params": { 00:23:28.117 "name": "Nvme$subsystem", 00:23:28.117 "trtype": "$TEST_TRANSPORT", 00:23:28.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.117 "adrfam": "ipv4", 00:23:28.117 "trsvcid": "$NVMF_PORT", 00:23:28.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.117 "hdgst": ${hdgst:-false}, 00:23:28.117 "ddgst": ${ddgst:-false} 00:23:28.117 }, 00:23:28.117 "method": "bdev_nvme_attach_controller" 00:23:28.117 } 00:23:28.117 EOF 00:23:28.117 )") 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.117 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.117 { 00:23:28.117 "params": { 00:23:28.117 "name": "Nvme$subsystem", 00:23:28.117 "trtype": "$TEST_TRANSPORT", 00:23:28.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.117 "adrfam": "ipv4", 00:23:28.117 "trsvcid": "$NVMF_PORT", 00:23:28.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.118 { 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme$subsystem", 00:23:28.118 "trtype": "$TEST_TRANSPORT", 00:23:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "$NVMF_PORT", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.118 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.118 { 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme$subsystem", 00:23:28.118 "trtype": "$TEST_TRANSPORT", 00:23:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "$NVMF_PORT", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.118 { 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme$subsystem", 00:23:28.118 "trtype": "$TEST_TRANSPORT", 00:23:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "$NVMF_PORT", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 [2024-11-05 19:12:57.374950] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:23:28.118 [2024-11-05 19:12:57.375004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415226 ] 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.118 { 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme$subsystem", 00:23:28.118 "trtype": "$TEST_TRANSPORT", 00:23:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "$NVMF_PORT", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:28.118 { 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme$subsystem", 00:23:28.118 "trtype": "$TEST_TRANSPORT", 00:23:28.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "$NVMF_PORT", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.118 "hdgst": ${hdgst:-false}, 00:23:28.118 "ddgst": ${ddgst:-false} 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 } 00:23:28.118 EOF 00:23:28.118 )") 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
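The jq ., IFS=, and printf steps traced just around this point are the tail of gen_nvmf_target_json: each pass of the loop above appended one bdev_nvme_attach_controller fragment via a heredoc, and the function finishes by comma-joining the fragments and validating the result with jq. A condensed sketch of that pattern follows; the outer "subsystems"/"bdev" wrapper is a plausible reconstruction rather than something visible in this trace, and TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are the environment variables expanded in the fragments above.

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # One attach-controller entry per subsystem id; ${hdgst:-false} and
    # ${ddgst:-false} supply the "false" defaults seen in the printed JSON.
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
    )")
  done
  # Comma-join the fragments inside a command substitution (IFS=, plus
  # "${config[*]}") and let jq validate/pretty-print the final document.
  jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

bdevperf then consumes the generated document through process substitution, which is why the trace shows --json /dev/fd/63:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10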
00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:23:28.118 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme1", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme2", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme3", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme4", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme5", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme6", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme7", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme8", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme9", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.118 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.118 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.118 "hdgst": false, 00:23:28.118 "ddgst": false 00:23:28.118 }, 00:23:28.118 "method": "bdev_nvme_attach_controller" 00:23:28.118 },{ 00:23:28.118 "params": { 00:23:28.118 "name": "Nvme10", 00:23:28.118 "trtype": "tcp", 00:23:28.118 "traddr": "10.0.0.2", 00:23:28.118 "adrfam": "ipv4", 00:23:28.118 "trsvcid": "4420", 00:23:28.119 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.119 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.119 "hdgst": false, 00:23:28.119 "ddgst": false 00:23:28.119 }, 00:23:28.119 "method": "bdev_nvme_attach_controller" 00:23:28.119 }' 00:23:28.379 [2024-11-05 19:12:57.446289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.379 [2024-11-05 19:12:57.482681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.289 Running I/O for 10 seconds... 00:23:30.549 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.549 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:30.549 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.549 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.549 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 415226 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 415226 ']' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 415226 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.810 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 415226 00:23:30.810 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:30.810 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:30.810 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 415226' 00:23:30.810 killing process with pid 415226 00:23:30.810 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 415226 00:23:30.810 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 415226 00:23:30.810 Received shutdown signal, test time was about 0.879201 seconds 00:23:30.810 00:23:30.810 Latency(us) 00:23:30.810 [2024-11-05T18:13:00.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.810 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.810 Verification LBA range: start 0x0 length 0x400 00:23:30.810 Nvme1n1 : 0.85 226.49 14.16 0.00 0.00 278857.10 22282.24 260396.37 00:23:30.810 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.810 Verification LBA range: start 0x0 length 0x400 00:23:30.810 Nvme2n1 : 0.88 291.47 18.22 0.00 0.00 211083.95 18568.53 206219.95 00:23:30.810 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.810 Verification LBA range: start 0x0 length 0x400 00:23:30.810 Nvme3n1 : 0.84 228.24 14.27 0.00 0.00 263862.90 19988.48 227191.47 00:23:30.810 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme4n1 : 0.83 229.98 14.37 0.00 0.00 255330.70 21845.33 251658.24 00:23:30.811 Job: Nvme5n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme5n1 : 0.86 222.77 13.92 0.00 0.00 257820.16 18350.08 253405.87 00:23:30.811 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme6n1 : 0.85 224.93 14.06 0.00 0.00 248630.61 34515.63 228939.09 00:23:30.811 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme7n1 : 0.85 225.69 14.11 0.00 0.00 241264.92 19442.35 242920.11 00:23:30.811 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme8n1 : 0.87 295.78 18.49 0.00 0.00 179569.07 15182.51 253405.87 00:23:30.811 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme9n1 : 0.87 220.99 13.81 0.00 0.00 234331.59 18677.76 258648.75 00:23:30.811 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.811 Verification LBA range: start 0x0 length 0x400 00:23:30.811 Nvme10n1 : 0.87 219.86 13.74 0.00 0.00 229395.34 19223.89 274377.39 00:23:30.811 [2024-11-05T18:13:00.134Z] =================================================================================================================== 00:23:30.811 [2024-11-05T18:13:00.134Z] Total : 2386.20 149.14 0.00 0.00 237221.63 15182.51 274377.39 00:23:31.071 19:13:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 414844 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:32.012 rmmod nvme_tcp 00:23:32.012 rmmod nvme_fabrics 00:23:32.012 rmmod nvme_keyring 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
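Between the modprobe -v -r nvme-tcp above and the set -e that follows, nvmfcleanup is running its tolerant unload loop: sync first, drop errexit, then retry the unload up to 20 times, because the kernel refuses to remove nvme-tcp while a controller still holds it. The trace only shows the first attempt succeeding (it removed nvme_tcp, nvme_fabrics and nvme_keyring together), so the break-on-success and back-off below are assumptions; a minimal sketch, run as root:

sync                      # flush dirty pages before tearing down the transport
set +e                    # a busy module makes 'modprobe -r' return nonzero
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # in the traced run this also rmmod'd
  sleep 1                            # nvme_fabrics and nvme_keyring; the
done                                 # sleep between retries is an assumption
modprobe -v -r nvme-fabrics          # belt-and-braces removal of the fabrics core
set -e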
00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 414844 ']' 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 414844 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 414844 ']' 00:23:32.012 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 414844 00:23:32.013 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:32.013 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 414844 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 414844' 00:23:32.273 killing process with pid 414844 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 414844 00:23:32.273 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 414844 00:23:32.533 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@264 -- # local dev 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:32.534 19:13:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # return 0 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:34.446 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@284 -- # iptr 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-save 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-restore 00:23:34.446 00:23:34.446 real 0m8.271s 00:23:34.446 user 0m25.341s 00:23:34.446 sys 0m1.304s 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.446 ************************************ 00:23:34.446 END TEST nvmf_shutdown_tc2 00:23:34.446 ************************************ 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:34.446 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:34.446 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.708 ************************************ 00:23:34.708 START TEST nvmf_shutdown_tc3 00:23:34.708 ************************************ 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:23:34.708 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:34.708 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.709 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # create_target_ns 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:34.709 
19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:34.709 10.0.0.1 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:34.709 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:34.710 10.0.0.2 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:34.710 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:34.710 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:34.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.691 ms 00:23:34.972 00:23:34.972 --- 10.0.0.1 ping statistics --- 00:23:34.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.972 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:34.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:34.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:23:34.972 00:23:34.972 --- 10.0.0.2 ping statistics --- 00:23:34.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.972 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.972 19:13:04 
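[editor's note] Every helper traced here (set_ip, set_up, ping_ip, get_ip_address) takes an optional variable *name* such as NVMF_TARGET_NS_CMD and binds it with a bash nameref, so one implementation serves both the host side and the nvmf_ns_spdk namespace. A hedged sketch of that dispatch, assuming the prefix array holds 'ip netns exec nvmf_ns_spdk' as the eval lines above indicate:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)   # assumed contents

    # Run a command on the host, or inside a namespace when the caller
    # passes the name of a prefix array (mirrors the "local -n ns=..."
    # steps in the trace above).
    run_maybe_in_ns() {
        local in_ns=$1; shift
        if [[ -n $in_ns ]]; then
            local -n ns=$in_ns
            eval "${ns[*]} $*"
        else
            eval " $*"
        fi
    }

    run_maybe_in_ns ""                 ping -c 1 10.0.0.2
    run_maybe_in_ns NVMF_TARGET_NS_CMD ping -c 1 10.0.0.1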
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:34.972 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 
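[editor's note] The initiator1 and target1 lookups above fail deliberately: get_net_dev only echoes a device when dev_map has an entry for that role, so with a single NIC pair NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up empty instead of aborting the run. A rough sketch of that lookup, assuming the dev_map["$key_initiator"]/dev_map["$key_target"] assignments earlier in the trace expand to the keys initiator0 and target0:

    declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

    # Echo the interface mapped to a logical role; return 1 when the role
    # is absent so callers can leave the corresponding IP variable empty.
    get_net_dev() {
        local dev=$1
        [[ -n $dev && -n ${dev_map[$dev]} ]] || return 1
        echo "${dev_map[$dev]}"
    }

    get_net_dev target0   # -> cvl_0_1
    get_net_dev target1   # exit status 1, no output -> NVMF_SECOND_TARGET_IP=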
00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=416703 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 416703 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 416703 ']' 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.973 19:13:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:34.973 [2024-11-05 19:13:04.269255] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:34.973 [2024-11-05 19:13:04.269322] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.233 [2024-11-05 19:13:04.362344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.233 [2024-11-05 19:13:04.396931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.233 [2024-11-05 19:13:04.396961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.233 [2024-11-05 19:13:04.396967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.233 [2024-11-05 19:13:04.396971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.233 [2024-11-05 19:13:04.396976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
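[editor's note] Here the test forks nvmf_tgt inside the namespace, records nvmfpid=416703, and blocks in waitforlisten until the app is accepting RPCs on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh; the loop below is only an illustrative equivalent (the -S socket probe and the retry cadence are assumptions, with max_retries=100 echoed from the trace above):

    # Wait until $pid is alive and its RPC UNIX socket exists.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [[ -S $rpc_addr ]] && return 0           # socket is up, RPCs can flow
            sleep 0.1
        done
        return 1
    }

    waitforlisten 416703 /var/tmp/spdk.sock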
00:23:35.233 [2024-11-05 19:13:04.398498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.233 [2024-11-05 19:13:04.398657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.233 [2024-11-05 19:13:04.398814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.233 [2024-11-05 19:13:04.398816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.804 [2024-11-05 19:13:05.117332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.804 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.065 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.065 Malloc1 00:23:36.065 [2024-11-05 19:13:05.231489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.065 Malloc2 00:23:36.065 Malloc3 00:23:36.065 Malloc4 00:23:36.065 Malloc5 00:23:36.326 Malloc6 00:23:36.326 Malloc7 00:23:36.326 Malloc8 00:23:36.326 Malloc9 00:23:36.326 Malloc10 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=417083 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 417083 /var/tmp/bdevperf.sock 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 417083 ']' 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.326 19:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.326 { 00:23:36.326 "params": { 00:23:36.326 "name": "Nvme$subsystem", 00:23:36.326 "trtype": "$TEST_TRANSPORT", 00:23:36.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.326 "adrfam": "ipv4", 00:23:36.326 "trsvcid": "$NVMF_PORT", 00:23:36.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.326 "hdgst": ${hdgst:-false}, 00:23:36.326 "ddgst": ${ddgst:-false} 00:23:36.326 }, 00:23:36.326 "method": "bdev_nvme_attach_controller" 00:23:36.326 } 00:23:36.326 EOF 00:23:36.326 )") 00:23:36.326 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.327 { 00:23:36.327 "params": { 00:23:36.327 "name": "Nvme$subsystem", 00:23:36.327 "trtype": "$TEST_TRANSPORT", 00:23:36.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.327 "adrfam": "ipv4", 00:23:36.327 "trsvcid": "$NVMF_PORT", 00:23:36.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.327 "hdgst": ${hdgst:-false}, 00:23:36.327 "ddgst": ${ddgst:-false} 00:23:36.327 }, 00:23:36.327 "method": "bdev_nvme_attach_controller" 00:23:36.327 } 00:23:36.327 EOF 00:23:36.327 )") 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.327 { 00:23:36.327 "params": { 00:23:36.327 
"name": "Nvme$subsystem", 00:23:36.327 "trtype": "$TEST_TRANSPORT", 00:23:36.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.327 "adrfam": "ipv4", 00:23:36.327 "trsvcid": "$NVMF_PORT", 00:23:36.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.327 "hdgst": ${hdgst:-false}, 00:23:36.327 "ddgst": ${ddgst:-false} 00:23:36.327 }, 00:23:36.327 "method": "bdev_nvme_attach_controller" 00:23:36.327 } 00:23:36.327 EOF 00:23:36.327 )") 00:23:36.327 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.588 { 00:23:36.588 "params": { 00:23:36.588 "name": "Nvme$subsystem", 00:23:36.588 "trtype": "$TEST_TRANSPORT", 00:23:36.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.588 "adrfam": "ipv4", 00:23:36.588 "trsvcid": "$NVMF_PORT", 00:23:36.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.588 "hdgst": ${hdgst:-false}, 00:23:36.588 "ddgst": ${ddgst:-false} 00:23:36.588 }, 00:23:36.588 "method": "bdev_nvme_attach_controller" 00:23:36.588 } 00:23:36.588 EOF 00:23:36.588 )") 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.588 { 00:23:36.588 "params": { 00:23:36.588 "name": "Nvme$subsystem", 00:23:36.588 "trtype": "$TEST_TRANSPORT", 00:23:36.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.588 "adrfam": "ipv4", 00:23:36.588 "trsvcid": "$NVMF_PORT", 00:23:36.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.588 "hdgst": ${hdgst:-false}, 00:23:36.588 "ddgst": ${ddgst:-false} 00:23:36.588 }, 00:23:36.588 "method": "bdev_nvme_attach_controller" 00:23:36.588 } 00:23:36.588 EOF 00:23:36.588 )") 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.588 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.588 { 00:23:36.588 "params": { 00:23:36.588 "name": "Nvme$subsystem", 00:23:36.588 "trtype": "$TEST_TRANSPORT", 00:23:36.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.588 "adrfam": "ipv4", 00:23:36.588 "trsvcid": "$NVMF_PORT", 00:23:36.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.589 "hdgst": ${hdgst:-false}, 00:23:36.589 "ddgst": ${ddgst:-false} 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 } 00:23:36.589 EOF 00:23:36.589 )") 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in 
"${@:-1}" 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.589 { 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme$subsystem", 00:23:36.589 "trtype": "$TEST_TRANSPORT", 00:23:36.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "$NVMF_PORT", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.589 "hdgst": ${hdgst:-false}, 00:23:36.589 "ddgst": ${ddgst:-false} 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 } 00:23:36.589 EOF 00:23:36.589 )") 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.589 [2024-11-05 19:13:05.686772] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.589 { 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme$subsystem", 00:23:36.589 "trtype": "$TEST_TRANSPORT", 00:23:36.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "$NVMF_PORT", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.589 "hdgst": ${hdgst:-false}, 00:23:36.589 "ddgst": ${ddgst:-false} 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 } 00:23:36.589 EOF 00:23:36.589 )") 00:23:36.589 [2024-11-05 19:13:05.686823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417083 ] 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.589 { 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme$subsystem", 00:23:36.589 "trtype": "$TEST_TRANSPORT", 00:23:36.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "$NVMF_PORT", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.589 "hdgst": ${hdgst:-false}, 00:23:36.589 "ddgst": ${ddgst:-false} 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 } 00:23:36.589 EOF 00:23:36.589 )") 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:36.589 { 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme$subsystem", 00:23:36.589 "trtype": "$TEST_TRANSPORT", 00:23:36.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.589 "adrfam": 
"ipv4", 00:23:36.589 "trsvcid": "$NVMF_PORT", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.589 "hdgst": ${hdgst:-false}, 00:23:36.589 "ddgst": ${ddgst:-false} 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 } 00:23:36.589 EOF 00:23:36.589 )") 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:23:36.589 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme1", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme2", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme3", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme4", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme5", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme6", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme7", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 
"adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme8", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme9", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 },{ 00:23:36.589 "params": { 00:23:36.589 "name": "Nvme10", 00:23:36.589 "trtype": "tcp", 00:23:36.589 "traddr": "10.0.0.2", 00:23:36.589 "adrfam": "ipv4", 00:23:36.589 "trsvcid": "4420", 00:23:36.589 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.589 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.589 "hdgst": false, 00:23:36.589 "ddgst": false 00:23:36.589 }, 00:23:36.589 "method": "bdev_nvme_attach_controller" 00:23:36.589 }' 00:23:36.589 [2024-11-05 19:13:05.758546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.589 [2024-11-05 19:13:05.795007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.504 Running I/O for 10 seconds... 
00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:38.504 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:38.765 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 416703 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 416703 ']' 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 416703 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.032 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 416703 00:23:39.033 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:39.033 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:39.033 19:13:08 
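[editor's note] Those three polls implement waitforio: read num_read_ops for Nvme1n1 over the bdevperf RPC socket and only proceed to shutdown once at least 100 reads have completed (3, then 67, then 131 in this run), with up to ten attempts 0.25 s apart. A boiled-down version of the loop, using SPDK's rpc.py in place of the test suite's rpc_cmd wrapper (the wrapper's extra bookkeeping is omitted):

    # Poll the bdev until enough reads prove IO is actually flowing.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [[ $count -ge 100 ]]; then
                ret=0
                break             # threshold reached; safe to kill the target
            fi
            sleep 0.25
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1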
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 416703' 00:23:39.033 killing process with pid 416703 00:23:39.033 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 416703 00:23:39.033 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 416703 00:23:39.033 [2024-11-05 19:13:08.324369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7f640 is same with the state(6) to be set 00:23:39.033 [... identical message repeated for tqpair=0x1a7f640 at timestamps 19:13:08.324418 through 19:13:08.324714 ...] 00:23:39.033 [2024-11-05 19:13:08.325821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.033 [... identical message repeated for tqpair=0x1a820a0 at timestamps 19:13:08.325849 through 19:13:08.326045 ...] 00:23:39.034 [2024-11-05 19:13:08.326049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.326142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820a0 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 
19:13:08.327348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same 
with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.034 [2024-11-05 19:13:08.327529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327557] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.327643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7fb10 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the 
state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.328999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 
19:13:08.329126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.035 [2024-11-05 19:13:08.329192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.329196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.329201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.329205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ffe0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same 
with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330277] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the 
state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a804d0 is same with the state(6) to be set 00:23:39.036 [2024-11-05 19:13:08.330566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.036 [2024-11-05 19:13:08.330601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.036 [2024-11-05 19:13:08.330612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.036 [2024-11-05 19:13:08.330620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.036 [2024-11-05 19:13:08.330628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.036 [2024-11-05 19:13:08.330636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.036 [2024-11-05 
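The wall of identical *ERROR* lines above is one message per redundant call into the TCP transport's recv-state setter: while the test tears the connections down, each qpair (0x1a7f640, 0x1a820a0, 0x1a7fb10, 0x1a7ffe0, 0x1a804d0, ...) is driven again into a recv state it is already in, and the setter logs instead of transitioning. A minimal standalone sketch of that guard pattern, with illustrative names rather than SPDK's exact code, and assuming state 6 is the terminal/error recv state (an assumption read off the message text, not confirmed by the log):

#include <stdio.h>

enum pdu_recv_state {                   /* illustrative, not SPDK's exact enum */
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* ... intermediate PDU-parsing states ... */
        RECV_STATE_ERROR = 6,
};

struct tqpair {
        enum pdu_recv_state recv_state;
};

static void set_recv_state(struct tqpair *tq, enum pdu_recv_state state)
{
        if (tq->recv_state == state) {
                /* redundant transition: refuse and log -- the pattern behind
                 * the repeated *ERROR* lines above */
                fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                        (void *)tq, (int)state);
                return;
        }
        tq->recv_state = state;
}

int main(void)
{
        struct tqpair tq = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

        set_recv_state(&tq, RECV_STATE_ERROR); /* real transition: silent */
        set_recv_state(&tq, RECV_STATE_ERROR); /* repeat: logs one line */
        return 0;
}

Each redundant call prints exactly one line, so the burst length tracks how often the teardown path re-enters the setter, not how many qpairs exist.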
00:23:39.036 [2024-11-05 19:13:08.330566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:39.036 [2024-11-05 19:13:08.330601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.036 (the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3)
00:23:39.036 [2024-11-05 19:13:08.330664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf320 is same with the state(6) to be set
00:23:39.037 (the same four aborted ASYNC EVENT REQUESTs followed by one recv-state error repeated for tqpair=0xac7e50, 0x69dcb0, 0x693160 and 0x694900)
00:23:39.037 [2024-11-05 19:13:08.332077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a80850 is same with the state(6) to be set
00:23:39.037 (message repeated twice more, through 19:13:08.332098)
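Every controller being disconnected has four ASYNC EVENT REQUESTs (admin opcode 0c, cid:0 through cid:3) outstanding on its admin queue, and each is completed with status (00/08) when that queue is deleted: status code type 00 is the generic set and status code 08 is Command Aborted due to SQ Deletion in the NVMe base specification. dnr:0 on the same line means the do-not-retry bit is clear, so the host may reissue the command after reconnecting. A small sketch that decodes the printed status fields; the struct layout follows the spec's 16-bit completion status field, and the helper name is ours:

#include <stdint.h>
#include <stdio.h>

struct cpl_status {     /* NVMe completion status field (CQE DW3, bits 31:16) */
        uint16_t p   : 1;       /* phase tag */
        uint16_t sc  : 8;       /* status code       -> the "08" */
        uint16_t sct : 3;       /* status code type  -> the "00" */
        uint16_t crd : 2;       /* command retry delay */
        uint16_t m   : 1;       /* more */
        uint16_t dnr : 1;       /* do not retry */
};

static int is_aborted_sq_deletion(struct cpl_status s)
{
        return s.sct == 0x0 && s.sc == 0x08;    /* "(00/08)" in the log */
}

int main(void)
{
        /* the values printed above: (00/08) ... p:0 m:0 dnr:0 */
        struct cpl_status s = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };

        printf("aborted by SQ deletion: %s, dnr=%u (retryable)\n",
               is_aborted_sq_deletion(s) ? "yes" : "no", s.dnr);
        return 0;
}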
00:23:39.037 [2024-11-05 19:13:08.332660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a80d20 is same with the state(6) to be set
00:23:39.038 (message repeated for tqpair=0x1a80d20 through 19:13:08.332968)
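The same (00/08) status is then applied to the I/O still in flight on sqid:1: the WRITEs below are one sequential stream chopped into 128-block commands, each starting exactly where the previous one ended (lba 29312 for cid:37 up through lba 31872 for cid:57). A quick check of that stride; the block size itself is not stated in the log:

#include <stdio.h>

int main(void)
{
        /* cid 37..57 as printed below; len:128 blocks per WRITE */
        unsigned int lba = 29312, len = 128, cid;

        for (cid = 37; cid <= 57; cid++, lba += len)
                printf("cid:%u lba:%u len:%u\n", cid, lba, len);
        /* last line: cid:57 lba:31872 len:128, matching the log */
        return 0;
}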
00:23:39.038 [2024-11-05 19:13:08.333306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 
[2024-11-05 19:13:08.333489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 
19:13:08.333660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.038 [2024-11-05 19:13:08.333771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.038 [2024-11-05 19:13:08.333781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.039 [2024-11-05 19:13:08.333789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.039 [2024-11-05 19:13:08.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.039 [2024-11-05 19:13:08.333806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.039 [2024-11-05 19:13:08.333816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.039 [2024-11-05 19:13:08.333823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.039 [2024-11-05 19:13:08.333832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.039 [2024-11-05 19:13:08.333840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a811f0 is same with the state(6) to be set
[last message repeated through 2024-11-05 19:13:08.334208; originally interleaved mid-line with the I/O entries below]
00:23:39.039 [2024-11-05 19:13:08.333867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.333989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.333997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.039 [2024-11-05 19:13:08.334124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.039 [2024-11-05 19:13:08.334132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.040 [2024-11-05 19:13:08.334263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.040 [2024-11-05 19:13:08.334272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:23:39.040 [2024-11-05 19:13:08.334279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.334432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.334441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 
[2024-11-05 19:13:08.334448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.337236] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.040 [2024-11-05 19:13:08.337263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:39.040 [2024-11-05 19:13:08.337302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe400 (9): Bad file descriptor 00:23:39.040 [2024-11-05 19:13:08.337341] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.040 [2024-11-05 19:13:08.337408] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.040 [2024-11-05 19:13:08.337442] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.040 [2024-11-05 19:13:08.338513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.040 [2024-11-05 19:13:08.338537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe400 with addr=10.0.0.2, port=4420 00:23:39.040 [2024-11-05 19:13:08.338546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe400 is same with the state(6) to be set 00:23:39.040 [2024-11-05 19:13:08.338578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.040 [2024-11-05 19:13:08.338776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.040 [2024-11-05 19:13:08.338785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.338983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.338992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.041 [2024-11-05 19:13:08.339377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.041 [2024-11-05 19:13:08.339387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:23:39.041 [2024-11-05 19:13:08.339394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.042 [2024-11-05 19:13:08.339404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.042 [2024-11-05 19:13:08.339411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.042 [2024-11-05 19:13:08.339421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.042 [2024-11-05 19:13:08.346129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a811f0 is same with the state(6) to be set
[last message repeated through 2024-11-05 19:13:08.346162]
00:23:39.042 [2024-11-05 19:13:08.347085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a81bb0 is same with the state(6) to be set
[last message repeated through 2024-11-05 19:13:08.347381]
00:23:39.314 [2024-11-05 19:13:08.353599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.315 [2024-11-05 19:13:08.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.315 [2024-11-05 19:13:08.353766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.353912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.353923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8c470 is same with the state(6) to be set 00:23:39.315 [2024-11-05 19:13:08.354058] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.315 [2024-11-05 19:13:08.354525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe400 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354582] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabeff0 is same with the state(6) to be set 00:23:39.315 [2024-11-05 19:13:08.354673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf320 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac7e50 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69dcb0 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac8b90 is same with the state(6) to be set 00:23:39.315 
[2024-11-05 19:13:08.354829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693160 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x694900 (9): Bad file descriptor 00:23:39.315 [2024-11-05 19:13:08.354872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac22a0 is same with the state(6) to be set 00:23:39.315 [2024-11-05 19:13:08.354959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.354992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.354999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.355007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.315 [2024-11-05 19:13:08.355014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.355021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5610 is same with the state(6) to be set 00:23:39.315 [2024-11-05 19:13:08.355039] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] 
Unable to perform failover, already in progress. 00:23:39.315 [2024-11-05 19:13:08.356425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.356442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.356456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.315 [2024-11-05 19:13:08.356466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.315 [2024-11-05 19:13:08.356478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.356984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.356992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 
[2024-11-05 19:13:08.357151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.316 [2024-11-05 19:13:08.357176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.316 [2024-11-05 19:13:08.357185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357319] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.317 [2024-11-05 19:13:08.357545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.317 [2024-11-05 19:13:08.357656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:39.317 [2024-11-05 19:13:08.357684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:39.317 [2024-11-05 19:13:08.357692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:39.317 [2024-11-05 19:13:08.357701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:39.317 [2024-11-05 19:13:08.357712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:39.317 [2024-11-05 19:13:08.359358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.317 [2024-11-05 19:13:08.359381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x694900 with addr=10.0.0.2, port=4420
00:23:39.317 [2024-11-05 19:13:08.359391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x694900 is same with the state(6) to be set
00:23:39.317 [2024-11-05 19:13:08.359808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:39.317 [2024-11-05 19:13:08.359827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac8b90 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.359838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x694900 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.359893] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:39.317 [2024-11-05 19:13:08.360194] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:39.317 [2024-11-05 19:13:08.360220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:39.317 [2024-11-05 19:13:08.360228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:39.317 [2024-11-05 19:13:08.360235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:39.317 [2024-11-05 19:13:08.360243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:39.317 [2024-11-05 19:13:08.360596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.317 [2024-11-05 19:13:08.360611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac8b90 with addr=10.0.0.2, port=4420
00:23:39.317 [2024-11-05 19:13:08.360619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac8b90 is same with the state(6) to be set
00:23:39.317 [2024-11-05 19:13:08.360666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac8b90 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.360705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:39.317 [2024-11-05 19:13:08.360712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:39.317 [2024-11-05 19:13:08.360720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:39.317 [2024-11-05 19:13:08.360727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:39.317 [2024-11-05 19:13:08.364560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabeff0 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.364615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac22a0 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.364633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b5610 (9): Bad file descriptor
00:23:39.317 [2024-11-05 19:13:08.364750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.317 [2024-11-05 19:13:08.364762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands sqid:1 cid:1-63 (lba 16512-24448, len:128, lba step 128) each paired with ABORTED - SQ DELETION (00/08) qid:1 cid:0, timestamps 19:13:08.364775 through 19:13:08.365843 ...]
00:23:39.319 [2024-11-05 19:13:08.365851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b180 is same with the state(6) to be set
00:23:39.319 [2024-11-05 19:13:08.367128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.319 [2024-11-05 19:13:08.367145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands sqid:1 cid:1-3 (lba 24704-24960, len:128, lba step 128) each paired with ABORTED - SQ DELETION (00/08) qid:1 cid:0, timestamps 19:13:08.367156 through 19:13:08.367198 ...]
00:23:39.319 [2024-11-05 19:13:08.367208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.319 [2024-11-05 19:13:08.367387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.319 [2024-11-05 19:13:08.367397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.320 [2024-11-05 19:13:08.367906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.367983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.367991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.368008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.368029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.368046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 19:13:08.368064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.320 [2024-11-05 
19:13:08.368081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.320 [2024-11-05 19:13:08.368090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.368235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.368244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a25c0 is same with the state(6) to be set 00:23:39.321 [2024-11-05 19:13:08.369522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.369986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.369995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.370003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.370012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.370020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.370029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.370037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.370046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.370053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.321 [2024-11-05 19:13:08.370063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.321 [2024-11-05 19:13:08.370070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.322 [2024-11-05 19:13:08.370411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 
19:13:08.370581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.370633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.370642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a3a10 is same with the state(6) to be set 00:23:39.322 [2024-11-05 19:13:08.371931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.371947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.371961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.371970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.371981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.371990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.372001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.372011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.372022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.372031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.322 [2024-11-05 19:13:08.372041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.322 [2024-11-05 19:13:08.372049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.323 [2024-11-05 19:13:08.372058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.323 [2024-11-05 19:13:08.372066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.323-00:23:39.324 [2024-11-05 19:13:08.372075 .. 19:13:08.373047] nvme_qpair.c: 243/474: (condensed: 57 further READ command/completion pairs, identical except cid:7..63 and lba:17280..24448 advancing in steps of 128, each completed as *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:23:39.324 [2024-11-05 19:13:08.373055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa26c0 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.374299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.374317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.374329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.374340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.374457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
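The status tuple "(00/08)" printed with every aborted completion above is the NVMe status code type / status code pair: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which a controller reports for commands still queued when their submission queue is torn down during a reset. A minimal Python sketch of that decoding (a hypothetical helper for reading these logs, not part of SPDK or the test suite; the status table is abridged from the NVMe base specification):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    # Only generic status codes relevant to this log are mapped.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
        return "sct 0x%x, sc 0x%02x" % (sct, sc)

    print(decode_status(0x0, 0x08))  # prints "ABORTED - SQ DELETION", i.e. "(00/08)"
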
00:23:39.324 [2024-11-05 19:13:08.374996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.375038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabe400 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.375049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabe400 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.375409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.375421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69dcb0 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.375428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69dcb0 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.375635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.375645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x693160 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.375652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693160 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.376090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.376129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac7e50 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.376140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac7e50 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.377258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.377275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:39.324 [2024-11-05 19:13:08.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.377654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xabf320 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.377662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabf320 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.377672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe400 (9): Bad file descriptor
00:23:39.324 [2024-11-05 19:13:08.377683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69dcb0 (9): Bad file descriptor
00:23:39.324 [2024-11-05 19:13:08.377692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693160 (9): Bad file descriptor
00:23:39.324 [2024-11-05 19:13:08.377701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac7e50 (9): Bad file descriptor
00:23:39.324 [2024-11-05 19:13:08.378016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.378031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x694900 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.378039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x694900 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.378365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.324 [2024-11-05 19:13:08.378376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xac8b90 with addr=10.0.0.2, port=4420
00:23:39.324 [2024-11-05 19:13:08.378383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac8b90 is same with the state(6) to be set
00:23:39.324 [2024-11-05 19:13:08.378392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf320 (9): Bad file descriptor
00:23:39.324 [2024-11-05 19:13:08.378402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:39.324 [2024-11-05 19:13:08.378409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:39.324 [2024-11-05 19:13:08.378419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:39.324 [2024-11-05 19:13:08.378427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:39.324 [2024-11-05 19:13:08.378435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:39.324 [2024-11-05 19:13:08.378442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:39.325 [2024-11-05 19:13:08.378449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:39.325 [2024-11-05 19:13:08.378455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:39.325 [2024-11-05 19:13:08.378462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:39.325 [2024-11-05 19:13:08.378469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:39.325 [2024-11-05 19:13:08.378476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:39.325 [2024-11-05 19:13:08.378482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:39.325 [2024-11-05 19:13:08.378489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:39.325 [2024-11-05 19:13:08.378495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:39.325 [2024-11-05 19:13:08.378502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:39.325 [2024-11-05 19:13:08.378513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
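Two OS-level error numbers recur in the reconnect attempts above: errno = 111 from connect() is ECONNREFUSED on Linux (nothing is accepting at 10.0.0.2:4420 while the target side is being torn down), and the "(9)" in the flush failures is EBADF, a bad file descriptor, i.e. the socket was already closed out from under the qpair. A small Python sketch to confirm the mapping (illustrative only, assuming Linux errno values; not part of the test):

    import errno, os

    for e in (111, 9):
        # errno 111 -> ECONNREFUSED ("Connection refused")
        # errno 9   -> EBADF ("Bad file descriptor")
        print(e, errno.errorcode[e], os.strerror(e))
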
00:23:39.325 [2024-11-05 19:13:08.378573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.325 [2024-11-05 19:13:08.378584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.325-00:23:39.326 [2024-11-05 19:13:08.378599 .. 19:13:08.379682] nvme_qpair.c: 243/474: (condensed: 63 further READ command/completion pairs, identical except cid:1..63 and lba:16512..24448 advancing in steps of 128, each completed as *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:23:39.326 [2024-11-05 19:13:08.379690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0180 is same with the state(6) to be set
00:23:39.326 [2024-11-05 19:13:08.380981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.326 [2024-11-05 19:13:08.380996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
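Each burst of aborts in this log is mechanical: cid advances by 1 and lba by 128 per command, and every completion carries the same SQ-deletion status, so a long run can be verified or collapsed with a few lines of scripting. A rough sketch, assuming the raw console output has been saved to console.log (the file name and regex are illustrative, not part of the test harness):

    import re

    # Pull (cid, lba) out of every READ notice in the saved log.
    pat = re.compile(r"READ sqid:1 cid:(\d+) nsid:1 lba:(\d+) len:128")
    with open("console.log") as f:
        pairs = [(int(c), int(l)) for c, l in pat.findall(f.read())]

    # Group consecutive records whose LBA advances by exactly 128.
    runs, start = [], 0
    for i in range(1, len(pairs) + 1):
        if i == len(pairs) or pairs[i][1] != pairs[i - 1][1] + 128:
            runs.append((pairs[start], pairs[i - 1]))
            start = i

    for (c0, l0), (c1, l1) in runs:
        print("cid %d..%d, lba %d..%d (step 128)" % (c0, c1, l0, l1))
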
00:23:39.326-00:23:39.328 [2024-11-05 19:13:08.381008 .. 19:13:08.381959] nvme_qpair.c: 243/474: (condensed: 55 further READ command/completion pairs, identical except cid:5..59 and lba:25216..32128 advancing in steps of 128, each completed as *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
00:23:39.328 [2024-11-05
19:13:08.381969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.381976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.381986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.381994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.382099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.382107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0590 is same with the state(6) to be set 00:23:39.328 [2024-11-05 19:13:08.383379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.328 [2024-11-05 19:13:08.383742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.328 [2024-11-05 19:13:08.383757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.383989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.383999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.329 [2024-11-05 19:13:08.384447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.329 [2024-11-05 19:13:08.384454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.330 [2024-11-05 19:13:08.384464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.330 [2024-11-05 19:13:08.384471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.330 [2024-11-05 19:13:08.384480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.330 [2024-11-05 19:13:08.384488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.330 [2024-11-05 19:13:08.384498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.330 [2024-11-05 19:13:08.384505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.330 [2024-11-05 19:13:08.384514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa1b20 is same with the state(6) to be set 00:23:39.330 [2024-11-05 19:13:08.386052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:39.330 [2024-11-05 19:13:08.386075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:39.330 task offset: 29312 on job bdev=Nvme5n1 fails 00:23:39.330 00:23:39.330 Latency(us) 00:23:39.330 [2024-11-05T18:13:08.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme1n1 ended in about 0.96 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme1n1 : 0.96 133.21 8.33 66.61 0.00 316831.57 19005.44 249910.61 00:23:39.330 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme2n1 ended in about 0.95 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme2n1 : 0.95 202.09 12.63 67.36 0.00 229992.75 17257.81 246415.36 00:23:39.330 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme3n1 ended in about 0.96 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme3n1 : 0.96 199.33 12.46 66.44 0.00 228414.72 17039.36 249910.61 00:23:39.330 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme4n1 ended in about 0.97 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme4n1 : 0.97 198.83 12.43 66.28 0.00 224208.21 17367.04 246415.36 00:23:39.330 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme5n1 ended in about 0.93 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme5n1 : 0.93 206.33 12.90 68.78 0.00 210585.71 3549.87 248162.99 00:23:39.330 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme6n1 ended in about 0.97 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme6n1 : 0.97 131.32 8.21 65.66 0.00 289146.03 18786.99 286610.77 00:23:39.330 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme7n1 ended in about 0.98 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme7n1 : 0.98 200.60 12.54 65.50 0.00 209253.30 18240.85 228939.09 00:23:39.330 Job: 
Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme8n1 ended in about 0.95 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme8n1 : 0.95 201.52 12.59 67.17 0.00 201611.63 3604.48 244667.73 00:23:39.330 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme9n1 ended in about 0.98 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme9n1 : 0.98 130.68 8.17 65.34 0.00 271530.38 19770.03 249910.61 00:23:39.330 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.330 Job: Nvme10n1 ended in about 0.97 seconds with error 00:23:39.330 Verification LBA range: start 0x0 length 0x400 00:23:39.330 Nvme10n1 : 0.97 132.22 8.26 66.11 0.00 261376.28 17913.17 267386.88 00:23:39.330 [2024-11-05T18:13:08.653Z] =================================================================================================================== 00:23:39.330 [2024-11-05T18:13:08.653Z] Total : 1736.13 108.51 665.25 0.00 239750.33 3549.87 286610.77 00:23:39.330 [2024-11-05 19:13:08.411125] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:39.330 [2024-11-05 19:13:08.411173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:39.330 [2024-11-05 19:13:08.411234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x694900 (9): Bad file descriptor 00:23:39.330 [2024-11-05 19:13:08.411249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac8b90 (9): Bad file descriptor 00:23:39.330 [2024-11-05 19:13:08.411258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:39.330 [2024-11-05 19:13:08.411266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:39.330 [2024-11-05 19:13:08.411275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:39.330 [2024-11-05 19:13:08.411283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
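A quick cross-check of the bdevperf table above (an editorial sketch, not output of the test run): MiB/s is just IOPS times the 65536-byte IO size from the job header. Values below are copied from the Nvme1n1 row; the same check works for any row or for the Total line.

#!/usr/bin/env bash
# Recompute MiB/s from IOPS for the Nvme1n1 row above.
# 133.21 IOPS x 65536 B/IO = ~8.33 MiB/s, matching the table.
awk 'BEGIN {
    iops = 133.21          # from the Nvme1n1 row
    io_size = 65536        # bytes per IO, from the job header
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'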
00:23:39.330 [2024-11-05 19:13:08.411844-411875] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xac22a0 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xac22a0 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.412169-412187] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x5b5610 with addr=10.0.0.2, port=4420; the recv state of tqpair=0x5b5610 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.412400-412417] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xabeff0 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xabeff0 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.412425-412452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.330 [2024-11-05 19:13:08.412461-412481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.330 [2024-11-05 19:13:08.412515] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:39.330 [2024-11-05 19:13:08.412528] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:39.330 [2024-11-05 19:13:08.412538] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:39.330 [2024-11-05 19:13:08.412551] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:39.330 [2024-11-05 19:13:08.412561] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:39.330 [2024-11-05 19:13:08.413384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac22a0 (9): Bad file descriptor
00:23:39.330 [2024-11-05 19:13:08.413500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b5610 (9): Bad file descriptor
00:23:39.330 [2024-11-05 19:13:08.413509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabeff0 (9): Bad file descriptor
00:23:39.330 [2024-11-05 19:13:08.413563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:39.330 [2024-11-05 19:13:08.413915-413936] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xac7e50 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xac7e50 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.414118-414135] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x693160 with addr=10.0.0.2, port=4420; the recv state of tqpair=0x693160 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.414453-414474] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x69dcb0 with addr=10.0.0.2, port=4420; the recv state of tqpair=0x69dcb0 is same with the state(6) to be set
00:23:39.330 [2024-11-05 19:13:08.414847-414865] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xabe400 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xabe400 is same with the state(6) to be set
00:23:39.331 [2024-11-05 19:13:08.415208-415224] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xabf320 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xabf320 is same with the state(6) to be set
00:23:39.331 [2024-11-05 19:13:08.415232-415252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.415259-415279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.415286-415305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418076-418130] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xac8b90 with addr=10.0.0.2, port=4420; the recv state of tqpair=0xac8b90 is same with the state(6) to be set
00:23:39.331 [2024-11-05 19:13:08.418480-418500] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x694900 with addr=10.0.0.2, port=4420; the recv state of tqpair=0x694900 is same with the state(6) to be set
00:23:39.331 [2024-11-05 19:13:08.418512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac7e50 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693160 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69dcb0 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabe400 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabf320 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac8b90 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x694900 (9): Bad file descriptor
00:23:39.331 [2024-11-05 19:13:08.418617-418640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418648-418668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418676-418696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418704-418723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418731-418757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418802-418825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:39.331 [2024-11-05 19:13:08.418833-418855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
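Every data command above was completed with the same status pair, printed as "(00/08)". That pair is NVMe status code type / status code; a small illustrative bash helper follows (an editorial sketch, not part of the test framework — only the generic-status codes relevant to this log are mapped, with names per the NVMe spec's generic command status table).

#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints,
# e.g. "(00/08)" above. Illustrative only: just a few SCT 00 codes.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/00) echo "SUCCESS" ;;
        00/07) echo "COMMAND ABORT REQUESTED" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;
        *)     echo "unmapped status (sct=$sct sc=$sc)" ;;
    esac
}

decode_nvme_status 00 08   # prints: ABORTED - SQ DELETION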
00:23:39.331 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 417083
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 417083
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 417083
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:40.272 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20}
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:23:40.532 rmmod nvme_tcp
00:23:40.532 rmmod nvme_fabrics
00:23:40.532 rmmod nvme_keyring
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 416703 ']'
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 416703
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 416703 ']'
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 416703
00:23:40.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (416703) - No such process
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 416703 is not found'
00:23:40.532 Process with pid 416703 is not found
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@264 -- # local dev
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@267 -- # remove_target_ns
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:23:40.532 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@268 -- # delete_main_bridge
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # return 0
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:23:42.445 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=()
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@284 -- # iptr
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-save
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-restore
00:23:42.446
00:23:42.446 real 0m7.975s
00:23:42.446 user 0m19.834s
00:23:42.446 sys 0m1.252s
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:42.446 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:42.446 ************************************
00:23:42.446 END TEST nvmf_shutdown_tc3
00:23:42.446 ************************************
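The iptr step traced just above removes SPDK's firewall entries by round-tripping the ruleset: dump it, drop every line mentioning SPDK_NVMF, and load the remainder back. The idiom as a standalone sketch (run as root; note that grep -v also drops comment lines containing the tag, and non-SPDK rules pass through untouched):

#!/usr/bin/env bash
# Same pipeline as nvmf/common.sh@542 above: dump the current ruleset,
# filter out every SPDK_NVMF-tagged line, restore the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore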
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@138 -- # mlx=() 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:42.708 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 
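The discovery pass running here maps each PCI function from the device scan (0000:4b:00.0 and 0000:4b:00.1, both 0x8086:0x159b E810 ports bound to the ice driver) to its kernel net device by globbing sysfs; the "Found net devices under ..." echoes mark each hit. A minimal standalone sketch of that pattern follows — the PCI addresses are taken from this run, but the loop body is illustrative rather than the exact nvmf/common.sh code:

  #!/usr/bin/env bash
  # Sketch: resolve PCI NIC functions to kernel net device names via sysfs,
  # mirroring the pci_net_devs=(".../net/"*) glob visible in the trace.
  for pci in 0000:4b:00.0 0000:4b:00.1; do        # E810 ports seen in this run
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] || continue                  # port has no bound net driver
      echo "Found net devices under $pci: ${path##*/}"
    done
  done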
00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # create_target_ns 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:42.709 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 
00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:42.709 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:42.709 10.0.0.1 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.709 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:42.710 10.0.0.2 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:42.710 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:42.973 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:42.973 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:42.973 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.973 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:42.974 
19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:42.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.664 ms 00:23:42.974 00:23:42.974 --- 10.0.0.1 ping statistics --- 00:23:42.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.974 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:42.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:42.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:23:42.974 00:23:42.974 --- 10.0.0.2 ping statistics --- 00:23:42.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.974 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.974 19:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:42.974 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 
00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:42.975 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=418496 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 418496 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 418496 ']' 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:43.236 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:43.236 [2024-11-05 19:13:12.358996] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:43.236 [2024-11-05 19:13:12.359065] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.236 [2024-11-05 19:13:12.454356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.236 [2024-11-05 19:13:12.488796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.236 [2024-11-05 19:13:12.488828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.236 [2024-11-05 19:13:12.488834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.236 [2024-11-05 19:13:12.488839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.236 [2024-11-05 19:13:12.488842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
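At this point nvmfappstart has launched nvmf_tgt inside the nvmf_ns_spdk namespace and waitforlisten is polling for its RPC socket; the core mask -m 0x1E pins reactors to cores 1-4, matching the four "Reactor started" notices that follow, and the quadruple "ip netns exec nvmf_ns_spdk" prefix appears to be the namespace wrapper accumulating once per shutdown test case run in this shell. A simplified sketch of the launch-and-wait pattern, assuming the build path from this job and a hypothetical retry budget (the real waitforlisten in autotest_common.sh is more thorough):

  # Sketch only: start the target in the test namespace, then wait for the
  # RPC socket to appear before issuing rpc_cmd calls against it.
  ip netns exec nvmf_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do                 # hypothetical ~10 s budget
    [[ -S /var/tmp/spdk.sock ]] && break          # RPC socket is listening
    sleep 0.1
  done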
00:23:43.236 [2024-11-05 19:13:12.490145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.236 [2024-11-05 19:13:12.490305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.236 [2024-11-05 19:13:12.490460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.236 [2024-11-05 19:13:12.490462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.178 [2024-11-05 19:13:13.212885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.178 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.179 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.179 Malloc1 00:23:44.179 [2024-11-05 19:13:13.327554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.179 Malloc2 00:23:44.179 Malloc3 00:23:44.179 Malloc4 00:23:44.179 Malloc5 00:23:44.179 Malloc6 00:23:44.440 Malloc7 00:23:44.440 Malloc8 00:23:44.440 Malloc9 00:23:44.440 Malloc10 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=418725 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:44.440 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:44.700 [2024-11-05 19:13:13.791168] 
subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 418496 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 418496 ']' 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 418496 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 418496 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 418496' 00:23:50.062 killing process with pid 418496 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 418496 00:23:50.062 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 418496 00:23:50.062 [2024-11-05 19:13:18.807646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce7f0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.807987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cecc0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.808016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cecc0 is same with the state(6) to be set 00:23:50.062 [2024-11-05 19:13:18.808022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cecc0 is same with the state(6) to be set 00:23:50.062 
[condensed: the per-I/O lines "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat between every entry below and are elided; the distinct diagnostics are kept verbatim]
00:23:50.062 [2024-11-05 19:13:18.808027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cecc0 is same with the state(6) to be set [repeated 9x, 19:13:18.808027-.808066]
00:23:50.063 [2024-11-05 19:13:18.808128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cf1b0 is same with the state(6) to be set [repeated 6x, 19:13:18.808128-.808179]
00:23:50.063 [2024-11-05 19:13:18.808397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ce320 is same with the state(6) to be set [repeated 5x, 19:13:18.808397-.808439, interleaved with write-error lines]
00:23:50.063 [2024-11-05 19:13:18.809157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.063 [2024-11-05 19:13:18.810017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.064 [2024-11-05 19:13:18.810957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.064 [2024-11-05 19:13:18.812216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.064 NVMe io qpair process completion error
00:23:50.064 [2024-11-05 19:13:18.813502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.065 [2024-11-05 19:13:18.814762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.065 [2024-11-05 19:13:18.815638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.066 [2024-11-05 19:13:18.817013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.066 NVMe io qpair process completion error
00:23:50.066 [2024-11-05 19:13:18.818304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.066 [2024-11-05 19:13:18.819221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.067 [2024-11-05 19:13:18.820150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.067 [2024-11-05 19:13:18.823089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.067 NVMe io qpair process completion error
00:23:50.068 [2024-11-05 19:13:18.824395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.068 [2024-11-05 19:13:18.825318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.068 [2024-11-05 19:13:18.826246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.069 [2024-11-05 19:13:18.828208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.069 NVMe io qpair process completion error
[... further per-I/O "Write completed with error (sct=0, sc=8)" lines follow ...]
completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 [2024-11-05 19:13:18.829507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.069 starting I/O failed: -6 00:23:50.069 starting I/O failed: -6 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 Write completed with 
error (sct=0, sc=8) 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 starting I/O failed: -6 00:23:50.069 Write completed with error (sct=0, sc=8) 00:23:50.069 [2024-11-05 19:13:18.830475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.069 starting I/O failed: -6 00:23:50.070 starting I/O failed: -6 00:23:50.070 starting I/O failed: -6 00:23:50.070 starting I/O failed: -6 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 
00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 [2024-11-05 19:13:18.831624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting 
I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 [2024-11-05 19:13:18.833270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.070 NVMe io qpair process completion error 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 starting I/O failed: -6 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.070 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed 
with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 [2024-11-05 19:13:18.834392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, 
sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 [2024-11-05 19:13:18.835218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write 
completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 [2024-11-05 19:13:18.836140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.071 Write completed with error (sct=0, sc=8) 00:23:50.071 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O 
failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O 
failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 [2024-11-05 19:13:18.838664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.072 NVMe io qpair process completion error 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, 
sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 [2024-11-05 19:13:18.839739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.072 starting I/O failed: -6 00:23:50.072 starting I/O failed: -6 00:23:50.072 starting I/O failed: -6 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 Write completed with error (sct=0, sc=8) 00:23:50.072 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 [2024-11-05 19:13:18.840733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 
00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 [2024-11-05 19:13:18.841663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.073 Write completed with error (sct=0, sc=8) 00:23:50.073 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O 
failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 [2024-11-05 19:13:18.843336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.074 NVMe io qpair process completion error 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error 
(sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 [2024-11-05 19:13:18.844467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:50.074 starting I/O failed: -6 00:23:50.074 starting I/O failed: -6 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write 
completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 [2024-11-05 19:13:18.845338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.074 starting I/O failed: -6 00:23:50.074 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, sc=8) 00:23:50.075 starting I/O failed: -6 00:23:50.075 Write completed with error (sct=0, 
sc=8)
00:23:50.075 Write completed with error (sct=0, sc=8)
00:23:50.075 starting I/O failed: -6
[hundreds of further identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries, interleaved between the qpair errors below, elided]
00:23:50.075 [2024-11-05 19:13:18.846285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.075 [2024-11-05 19:13:18.849477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.076 NVMe io qpair process completion error
00:23:50.076 [2024-11-05 19:13:18.850848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:50.076 [2024-11-05 19:13:18.851659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:50.076 [2024-11-05 19:13:18.852592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:50.077 [2024-11-05 19:13:18.854478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:50.077 NVMe io qpair process completion error
00:23:50.079 Initializing NVMe Controllers
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:50.079 Controller IO queue size 128, less than required.
00:23:50.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:50.079 Initialization complete. Launching workers.
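Every controller above prints the same pair of warnings: the perf run asked for more outstanding I/O per connection than the 128-entry I/O queue the target advertises, so the surplus submissions wait inside the NVMe driver exactly as the message says. A minimal sketch of how such a run could be re-issued with a depth the target can absorb; the flag names follow spdk_nvme_perf's usage text, while the depth, I/O size, duration, and subsystem are illustrative assumptions, not values recovered from this job:

    # hypothetical re-run with a bounded queue depth (assumed values, not from this log)
    QD=64         # stay at or below the controller's advertised IO queue size (128 here)
    IO_SIZE=4096  # bytes per I/O; smaller I/O likewise reduces driver-side queueing
    ./build/bin/spdk_nvme_perf -q "$QD" -o "$IO_SIZE" -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'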
00:23:50.079 ========================================================
00:23:50.079 Latency(us)
00:23:50.079 Device Information : IOPS MiB/s Average min max
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1856.56 79.77 68949.88 617.91 134666.27
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1868.41 80.28 67819.89 700.26 153035.08
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1843.88 79.23 68744.50 578.07 119369.84
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1899.18 81.61 66764.82 612.01 120230.04
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1895.65 81.45 66930.52 627.90 118426.07
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1893.36 81.36 67042.56 687.25 121860.23
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1899.60 81.62 66843.95 681.83 120018.34
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1889.20 81.18 67245.82 541.72 126279.80
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1890.86 81.25 67208.30 631.43 120380.36
00:23:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1921.84 82.58 66169.15 844.57 131637.29
00:23:50.079 ========================================================
00:23:50.079 Total : 18858.54 810.33 67362.49 541.72 153035.08
00:23:50.079
00:23:50.079 [2024-11-05 19:13:18.864603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bdae0 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bd720 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bb890 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bb560 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc410 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bbbc0 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bd900 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bbef0 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc740 is same with the state(6) to be set
00:23:50.079 [2024-11-05 19:13:18.864891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bca70 is same with the state(6) to be set
00:23:50.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:50.079 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:51.017 19:13:20
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 418725 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 418725 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 418725 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:23:51.017 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:51.018 rmmod nvme_tcp 00:23:51.018 rmmod nvme_fabrics 00:23:51.018 rmmod nvme_keyring 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@107 -- # return 0 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 418496 ']' 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 418496 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 418496 ']' 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 418496 00:23:51.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (418496) - No such process 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 418496 is not found' 00:23:51.018 Process with pid 418496 is not found 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@264 -- # local dev 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:51.018 19:13:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # return 0 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 
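In the teardown trace above, killprocess probes pid 418496 with kill -0 before trying to terminate it; the probe fails with "No such process" because the target already died along with its qpairs, so the helper only logs "Process with pid 418496 is not found" and returns. A simplified sketch of that existence-check idiom, modeled on the trace rather than the exact autotest_common.sh implementation:

    # simplified sketch of the kill -0 probe exercised above (illustrative only)
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then   # signal 0 only tests existence
            echo "Process with pid $pid is not found"
            return 0                            # already gone; nothing to kill
        fi
        kill "$pid"
        wait "$pid" 2>/dev/null || true         # reap it if it was our child
    }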
00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@284 -- # iptr 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-save 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-restore 00:23:52.929 00:23:52.929 real 0m10.375s 00:23:52.929 user 0m28.179s 00:23:52.929 sys 0m3.988s 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:52.929 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.929 ************************************ 00:23:52.929 END TEST nvmf_shutdown_tc4 00:23:52.929 ************************************ 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:53.191 00:23:53.191 real 0m43.919s 00:23:53.191 user 1m47.313s 00:23:53.191 sys 0m13.674s 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:53.191 ************************************ 00:23:53.191 END TEST nvmf_shutdown 00:23:53.191 ************************************ 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.191 ************************************ 00:23:53.191 START TEST nvmf_nsid 00:23:53.191 ************************************ 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:53.191 * Looking for test storage... 
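One idiom worth pulling out of the nvmf_fini trace above, before the nsid test's output continues: the iptr helper restores the firewall by replaying the saved ruleset minus every rule carrying the harness's SPDK_NVMF comment tag, so unrelated rules survive the teardown. The pipeline as traced:

    # drop only the harness's tagged rules; everything else is restored unchanged
    iptables-save | grep -v SPDK_NVMF | iptables-restore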
00:23:53.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:53.191 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.452 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.453 --rc genhtml_branch_coverage=1 00:23:53.453 --rc genhtml_function_coverage=1 00:23:53.453 --rc genhtml_legend=1 00:23:53.453 --rc geninfo_all_blocks=1 00:23:53.453 --rc geninfo_unexecuted_blocks=1 00:23:53.453 00:23:53.453 ' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.453 --rc genhtml_branch_coverage=1 00:23:53.453 --rc genhtml_function_coverage=1 00:23:53.453 --rc genhtml_legend=1 00:23:53.453 --rc geninfo_all_blocks=1 00:23:53.453 --rc geninfo_unexecuted_blocks=1 00:23:53.453 00:23:53.453 ' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.453 --rc genhtml_branch_coverage=1 00:23:53.453 --rc genhtml_function_coverage=1 00:23:53.453 --rc genhtml_legend=1 00:23:53.453 --rc geninfo_all_blocks=1 00:23:53.453 --rc geninfo_unexecuted_blocks=1 00:23:53.453 00:23:53.453 ' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:53.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.453 --rc genhtml_branch_coverage=1 00:23:53.453 --rc genhtml_function_coverage=1 00:23:53.453 --rc genhtml_legend=1 00:23:53.453 --rc geninfo_all_blocks=1 00:23:53.453 --rc geninfo_unexecuted_blocks=1 00:23:53.453 00:23:53.453 ' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.453 19:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:53.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:23:53.453 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.596 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:01.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:01.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 
0 > 0 )) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:01.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:01.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # create_target_ns 00:24:01.596 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:01.597 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:01.597 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:01.597 10.0.0.1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:01.597 10.0.0.2 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:01.597 19:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:01.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.623 ms 00:24:01.597 00:24:01.597 --- 10.0.0.1 ping statistics --- 00:24:01.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.597 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:24:01.597 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 
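The setup.sh trace above hands out addresses from a single integer pool (ip_pool=0x0a000001) and renders each value with printf before assigning it and recording it in the interface's ifalias. A minimal sketch of that value-to-dotted-quad step, assuming the shift/mask decomposition that yields the four printf arguments (the trace shows only the final printf call):

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1 (initiator side, cvl_0_0)
    val_to_ip 167772162   # -> 10.0.0.2 (target side, assigned inside nvmf_ns_spdk)

Writing the address into /sys/class/net/<dev>/ifalias and reading it back, as the trace does, lets later helpers recover the test IPs without re-deriving them from the pool.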
00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:01.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:24:01.598 00:24:01.598 --- 10.0.0.2 ping statistics --- 00:24:01.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.598 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:01.598 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:01.598 
19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # return 1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # return 1 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:24:01.598 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=424321 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 424321 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 424321 ']' 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:01.599 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.599 [2024-11-05 19:13:30.167791] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:01.599 [2024-11-05 19:13:30.167859] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.599 [2024-11-05 19:13:30.246644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.599 [2024-11-05 19:13:30.281046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.599 [2024-11-05 19:13:30.281077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.599 [2024-11-05 19:13:30.281086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.599 [2024-11-05 19:13:30.281092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.599 [2024-11-05 19:13:30.281099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.599 [2024-11-05 19:13:30.281669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.859 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:01.859 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:24:01.859 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:01.859 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.859 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=424361 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:01.859 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=127.0.0.1 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=031fbd15-14e6-46c4-b040-f15332ccaf56 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=77a32a15-11d0-4da3-a5b2-f66a55de1e07 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:01.860 19:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2f9da755-71a3-40f9-a14a-8b6ba1bd3cf7 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.860 null0 00:24:01.860 null1 00:24:01.860 null2 00:24:01.860 [2024-11-05 19:13:31.065873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.860 [2024-11-05 19:13:31.079760] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:01.860 [2024-11-05 19:13:31.079809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424361 ] 00:24:01.860 [2024-11-05 19:13:31.090098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 424361 /var/tmp/tgt2.sock 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 424361 ']' 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:01.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
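At this point the nsid test is running two SPDK targets side by side: nvmf_tgt (pid 424321) inside the nvmf_ns_spdk namespace on core mask 0x1, and spdk_tgt (pid 424361) on the host with core mask 0x2 and a private RPC socket. A hedged recap of the launch pattern, using only the flags visible in the trace (the RPC calls that create the three cnode subsystems and their null bdev namespaces are elided, since their bodies are not shown here):

    # target 1: lives in the namespace, answers on 10.0.0.2:4420,
    # driven over the default RPC socket /var/tmp/spdk.sock
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &

    # target 2: host network, answers on 127.0.0.1:4421; -r gives it its own
    # RPC socket so the two instances can be configured independently
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
    ./scripts/rpc.py -s /var/tmp/tgt2.sock ...   # subsystem/namespace setup (elided)

The separate core masks keep the two reactors off each other's cores, and the private RPC socket is what waitforlisten is polling for in the "Waiting for process..." message just above.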
00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:01.860 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:01.860 [2024-11-05 19:13:31.168011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.120 [2024-11-05 19:13:31.204192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.120 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:02.120 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:24:02.120 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:02.381 [2024-11-05 19:13:31.694727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.641 [2024-11-05 19:13:31.710856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:24:02.641 nvme0n1 nvme0n2 00:24:02.641 nvme1n1 00:24:02.641 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:02.641 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:02.642 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 127.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:24:04.023 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:04.964 19:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 031fbd15-14e6-46c4-b040-f15332ccaf56 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=031fbd1514e646c4b040f15332ccaf56 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 031FBD1514E646C4B040F15332CCAF56 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 031FBD1514E646C4B040F15332CCAF56 == \0\3\1\F\B\D\1\5\1\4\E\6\4\6\C\4\B\0\4\0\F\1\5\3\3\2\C\C\A\F\5\6 ]] 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:04.964 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 77a32a15-11d0-4da3-a5b2-f66a55de1e07 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=77a32a1511d04da3a5b2f66a55de1e07 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 77A32A1511D04DA3A5B2F66A55DE1E07 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 77A32A1511D04DA3A5B2F66A55DE1E07 == \7\7\A\3\2\A\1\5\1\1\D\0\4\D\A\3\A\5\B\2\F\6\6\A\5\5\D\E\1\E\0\7 ]] 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:24:05.225 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:05.226 19:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2f9da755-71a3-40f9-a14a-8b6ba1bd3cf7 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2f9da75571a340f9a14a8b6ba1bd3cf7 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2F9DA75571A340F9A14A8B6BA1BD3CF7 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2F9DA75571A340F9A14A8B6BA1BD3CF7 == \2\F\9\D\A\7\5\5\7\1\A\3\4\0\F\9\A\1\4\A\8\B\6\B\A\1\B\D\3\C\F\7 ]] 00:24:05.226 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 424361 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 424361 ']' 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 424361 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 424361 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 424361' 00:24:05.486 killing process with pid 424361 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 424361 00:24:05.486 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 424361 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 
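The three [[ ... == ... ]] comparisons above are the heart of the nsid test: each namespace's NGUID, as reported by the kernel, must equal the UUID it was created with, dashes stripped. A minimal sketch of one such check, assuming the dash-strip plus uppercase normalization the trace shows (uuid2nguid is essentially tr -d -, and nvme_get_nguid reads the JSON id-ns output):

    ns1uuid=031fbd15-14e6-46c4-b040-f15332ccaf56          # from uuidgen, above
    uuid2nguid() { local u=${1//-/}; echo "${u^^}"; }     # -> 031FBD15...AF56
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "${nguid^^}" == "$(uuid2nguid "$ns1uuid")" ]] && echo "nsid 1: NGUID matches"

Had any of the three comparisons failed, the test would have exited non-zero before reaching the disconnect and cleanup that follow.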
00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:05.746 rmmod nvme_tcp 00:24:05.746 rmmod nvme_fabrics 00:24:05.746 rmmod nvme_keyring 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 424321 ']' 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 424321 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 424321 ']' 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 424321 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.746 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 424321 00:24:05.746 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:05.746 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:05.746 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 424321' 00:24:05.746 killing process with pid 424321 00:24:05.746 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 424321 00:24:05.746 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 424321 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@264 -- # local dev 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:06.007 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # return 0 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # 
flush_ip cvl_0_0 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@284 -- # iptr 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-save 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-restore 00:24:07.918 00:24:07.918 real 0m14.877s 00:24:07.918 user 0m11.265s 00:24:07.918 sys 0m6.777s 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.918 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:07.918 ************************************ 00:24:07.918 END TEST nvmf_nsid 00:24:07.918 ************************************ 00:24:08.179 19:13:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:08.179 00:24:08.179 real 13m0.075s 00:24:08.179 user 27m14.403s 00:24:08.179 sys 3m52.035s 00:24:08.179 19:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:08.179 19:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.179 ************************************ 00:24:08.179 END TEST nvmf_target_extra 00:24:08.179 ************************************ 00:24:08.179 19:13:37 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:08.179 19:13:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.179 19:13:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.179 19:13:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.179 ************************************ 00:24:08.179 START TEST nvmf_host 00:24:08.179 ************************************ 
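The teardown traced above runs in three steps: kill the target app if it is still alive (killprocess), flush the test addresses off both ports (flush_ip), and strip only the firewall rules the suite tagged with an SPDK_NVMF comment (iptr). A minimal sketch of the same sequence, assuming root, the cvl_0_* port names found earlier, and that $nvmfpid holds the pid recorded at startup:

# kill the target app only if its pid still answers, then reap it
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid" && wait "$nvmfpid"

# drop the test addresses without touching link state
ip addr flush dev cvl_0_0
ip addr flush dev cvl_0_1

# remove only the rules this suite added; they all carry an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore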
00:24:08.179 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:08.179 * Looking for test storage... 00:24:08.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:08.179 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.179 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.179 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.440 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.441 --rc genhtml_branch_coverage=1 00:24:08.441 --rc genhtml_function_coverage=1 00:24:08.441 --rc genhtml_legend=1 00:24:08.441 --rc geninfo_all_blocks=1 00:24:08.441 --rc geninfo_unexecuted_blocks=1 00:24:08.441 00:24:08.441 ' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
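The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings into components and compares them left to right, so an lcov 1.x binary takes the legacy LCOV_OPTS branch. A minimal sketch of that comparison, assuming plain dot-separated versions (the real cmp_versions also splits on '-' and ':' and supports other operators):

lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly less
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: use legacy branch/function coverage flags"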
00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:08.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.441 ************************************ 00:24:08.441 START TEST nvmf_aer 00:24:08.441 ************************************ 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:08.441 * Looking for test storage... 
00:24:08.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.441 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.703 --rc genhtml_branch_coverage=1 00:24:08.703 --rc genhtml_function_coverage=1 00:24:08.703 --rc genhtml_legend=1 00:24:08.703 --rc geninfo_all_blocks=1 00:24:08.703 --rc geninfo_unexecuted_blocks=1 00:24:08.703 00:24:08.703 ' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:08.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.703 --rc genhtml_branch_coverage=1 00:24:08.703 --rc genhtml_function_coverage=1 00:24:08.703 --rc genhtml_legend=1 00:24:08.703 --rc geninfo_all_blocks=1 00:24:08.703 --rc geninfo_unexecuted_blocks=1 00:24:08.703 00:24:08.703 ' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.703 --rc genhtml_branch_coverage=1 00:24:08.703 --rc genhtml_function_coverage=1 00:24:08.703 --rc genhtml_legend=1 00:24:08.703 --rc geninfo_all_blocks=1 00:24:08.703 --rc geninfo_unexecuted_blocks=1 00:24:08.703 00:24:08.703 ' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.703 --rc genhtml_branch_coverage=1 00:24:08.703 --rc genhtml_function_coverage=1 00:24:08.703 --rc genhtml_legend=1 00:24:08.703 --rc geninfo_all_blocks=1 00:24:08.703 --rc geninfo_unexecuted_blocks=1 00:24:08.703 00:24:08.703 ' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.703 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:08.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:24:08.704 19:13:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.293 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.294 
19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:15.294 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:15.294 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:15.294 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
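The discovery loop traced above walks the cached e810 PCI addresses and resolves each one to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are located. A minimal sketch, with the two PCI addresses hard-coded from this run:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    # every netdev bound to this PCI function shows up under its sysfs node
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] && echo "Found net devices under $pci: ${path##*/}"
    done
done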
00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:15.294 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # create_target_ns 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 
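create_target_ns and the set_ip calls traced here and continued below amount to: create a network namespace for the target, move one port into it, and derive 10.0.0.1/10.0.0.2 from an integer pool (0x0a000001) with a printf byte split. A minimal sketch of those steps, assuming root and the cvl_0_* ports from this host:

val_to_ip() {
    local val=$1   # e.g. 167772161 == 0x0a000001 -> 10.0.0.1
    printf '%u.%u.%u.%u\n' $(( val >> 24 )) $(( (val >> 16) & 255 )) \
                           $(( (val >> 8) & 255 )) $(( val & 255 ))
}

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk

ip addr add "$(val_to_ip 167772161)/24" dev cvl_0_0
ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up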
00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:15.294 10.0.0.1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:15.294 10.0.0.2 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:15.294 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:15.295 19:13:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:15.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.600 ms 00:24:15.295 00:24:15.295 --- 10.0.0.1 ping statistics --- 00:24:15.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.295 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:15.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:15.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:24:15.295 00:24:15.295 --- 10.0.0.2 ping statistics --- 00:24:15.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.295 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:15.295 
19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:15.295 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:15.556 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:24:15.557 19:13:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=429473 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 429473 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 429473 ']' 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.557 19:13:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:15.557 [2024-11-05 19:13:44.740494] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:15.557 [2024-11-05 19:13:44.740559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.557 [2024-11-05 19:13:44.822956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.557 [2024-11-05 19:13:44.865906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.557 [2024-11-05 19:13:44.865941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.557 [2024-11-05 19:13:44.865949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.557 [2024-11-05 19:13:44.865957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.557 [2024-11-05 19:13:44.865963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
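Stripped to its effect, the startup just traced is: launch nvmf_tgt inside the target namespace, then poll its RPC socket until the app answers (the log's waitforlisten with max_retries=100). A minimal standalone sketch of that flow, assuming the default /var/tmp/spdk.sock socket and the SPDK checkout used in this run; this is an illustration, not the harness's exact nvmfappstart helper:

    # start the target inside the namespace prepared by setup.sh
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket with bounded retries, as waitforlisten does
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done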
00:24:15.557 [2024-11-05 19:13:44.867785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:15.557 [2024-11-05 19:13:44.867873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:15.557 [2024-11-05 19:13:44.868006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:15.557 [2024-11-05 19:13:44.868008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 [2024-11-05 19:13:45.602866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 Malloc0
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 [2024-11-05 19:13:45.676062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.500 [
00:24:16.500 {
00:24:16.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:16.500 "subtype": "Discovery",
00:24:16.500 "listen_addresses": [],
00:24:16.500 "allow_any_host": true,
00:24:16.500 "hosts": []
00:24:16.500 },
00:24:16.500 {
00:24:16.500 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:16.500 "subtype": "NVMe",
00:24:16.500 "listen_addresses": [
00:24:16.500 {
00:24:16.500 "trtype": "TCP",
00:24:16.500 "adrfam": "IPv4",
00:24:16.500 "traddr": "10.0.0.2",
00:24:16.500 "trsvcid": "4420"
00:24:16.500 }
00:24:16.500 ],
00:24:16.500 "allow_any_host": true,
00:24:16.500 "hosts": [],
00:24:16.500 "serial_number": "SPDK00000000000001",
00:24:16.500 "model_number": "SPDK bdev Controller",
00:24:16.500 "max_namespaces": 2,
00:24:16.500 "min_cntlid": 1,
00:24:16.500 "max_cntlid": 65519,
00:24:16.500 "namespaces": [
00:24:16.500 {
00:24:16.500 "nsid": 1,
00:24:16.500 "bdev_name": "Malloc0",
00:24:16.500 "name": "Malloc0",
00:24:16.500 "nguid": "6E80956C5D0F45CE8C8F76CE05F63650",
00:24:16.500 "uuid": "6e80956c-5d0f-45ce-8c8f-76ce05f63650"
00:24:16.500 }
00:24:16.500 ]
00:24:16.500 }
00:24:16.500 ]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=429635
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']'
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']'
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2
00:24:16.500 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.761 Malloc1
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.761 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 Asynchronous Event Request test
00:24:16.762 Attaching to 10.0.0.2
00:24:16.762 Attached to 10.0.0.2
00:24:16.762 Registering asynchronous event callbacks...
00:24:16.762 Starting namespace attribute notice tests for all controllers...
00:24:16.762 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:24:16.762 aer_cb - Changed Namespace
00:24:16.762 Cleaning up...
00:24:16.762 [
00:24:16.762 {
00:24:16.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:16.762 "subtype": "Discovery",
00:24:16.762 "listen_addresses": [],
00:24:16.762 "allow_any_host": true,
00:24:16.762 "hosts": []
00:24:16.762 },
00:24:16.762 {
00:24:16.762 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:16.762 "subtype": "NVMe",
00:24:16.762 "listen_addresses": [
00:24:16.762 {
00:24:16.762 "trtype": "TCP",
00:24:16.762 "adrfam": "IPv4",
00:24:16.762 "traddr": "10.0.0.2",
00:24:16.762 "trsvcid": "4420"
00:24:16.762 }
00:24:16.762 ],
00:24:16.762 "allow_any_host": true,
00:24:16.762 "hosts": [],
00:24:16.762 "serial_number": "SPDK00000000000001",
00:24:16.762 "model_number": "SPDK bdev Controller",
00:24:16.762 "max_namespaces": 2,
00:24:16.762 "min_cntlid": 1,
00:24:16.762 "max_cntlid": 65519,
00:24:16.762 "namespaces": [
00:24:16.762 {
00:24:16.762 "nsid": 1,
00:24:16.762 "bdev_name": "Malloc0",
00:24:16.762 "name": "Malloc0",
00:24:16.762 "nguid": "6E80956C5D0F45CE8C8F76CE05F63650",
00:24:16.762 "uuid": "6e80956c-5d0f-45ce-8c8f-76ce05f63650"
00:24:16.762 },
00:24:16.762 {
00:24:16.762 "nsid": 2,
00:24:16.762 "bdev_name": "Malloc1",
00:24:16.762 "name": "Malloc1",
00:24:16.762 "nguid": "2516251D53454C3DA616BC82A831A06F",
00:24:16.762 "uuid": "2516251d-5345-4c3d-a616-bc82a831a06f"
00:24:16.762 }
00:24:16.762 ]
00:24:16.762 }
00:24:16.762 ]
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 429635
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
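The exchange above is the heart of the test: aer connects, registers namespace-attribute notices, and touches /tmp/aer_touch_file once its callbacks are armed; the harness polls for that file, hot-adds Malloc1 as nsid 2, and the controller answers with the Changed Namespace AEN (log page 4) seen in the output before teardown begins. The polling side is a plain bounded wait on a path; a sketch of the pattern matching the traced loop (200 tries of 0.1s, so roughly a 20-second budget), not the exact common.sh helper:

    # poll for the aer tool's sentinel file, giving up after ~20s
    waitforfile() {
        local path=$1 i=0
        while [ ! -e "$path" ] && [ "$i" -lt 200 ]; do
            sleep 0.1
            i=$((i + 1))
        done
        [ -e "$path" ]
    }
    waitforfile /tmp/aer_touch_file || exit 1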
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.762 19:13:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20}
00:24:16.762 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 429473 ']'
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 429473
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 429473 ']'
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 429473
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 429473
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 429473'
killing process with pid 429473
19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 429473
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 429473
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer --
nvmf/common.sh@342 -- # nvmf_fini 00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@264 -- # local dev 00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:17.023 19:13:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # return 0 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@284 -- # iptr 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-save 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-restore 00:24:19.572 00:24:19.572 real 0m10.742s 00:24:19.572 user 0m7.650s 00:24:19.572 sys 0m5.644s 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:19.572 ************************************ 00:24:19.572 END TEST nvmf_aer 00:24:19.572 ************************************ 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.572 ************************************ 00:24:19.572 START TEST nvmf_async_init 00:24:19.572 ************************************ 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.572 * Looking for test storage... 00:24:19.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:19.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.572 --rc genhtml_branch_coverage=1 00:24:19.572 --rc genhtml_function_coverage=1 00:24:19.572 --rc genhtml_legend=1 00:24:19.572 --rc geninfo_all_blocks=1 00:24:19.572 --rc geninfo_unexecuted_blocks=1 00:24:19.572 00:24:19.572 ' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:19.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.572 --rc genhtml_branch_coverage=1 00:24:19.572 --rc genhtml_function_coverage=1 00:24:19.572 --rc genhtml_legend=1 00:24:19.572 --rc geninfo_all_blocks=1 00:24:19.572 --rc geninfo_unexecuted_blocks=1 00:24:19.572 00:24:19.572 ' 00:24:19.572 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:19.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.572 --rc genhtml_branch_coverage=1 00:24:19.572 --rc genhtml_function_coverage=1 00:24:19.572 --rc genhtml_legend=1 00:24:19.573 --rc geninfo_all_blocks=1 00:24:19.573 --rc geninfo_unexecuted_blocks=1 00:24:19.573 00:24:19.573 ' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:19.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.573 --rc genhtml_branch_coverage=1 00:24:19.573 --rc genhtml_function_coverage=1 00:24:19.573 --rc genhtml_legend=1 00:24:19.573 --rc geninfo_all_blocks=1 00:24:19.573 --rc geninfo_unexecuted_blocks=1 00:24:19.573 00:24:19.573 ' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.573 19:13:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:19.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:19.573 
19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=39d1ea7a9f4f4e86af17f09938db5219 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:24:19.573 19:13:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.722 19:13:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.722 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.722 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:27.722 19:13:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.722 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.722 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # create_target_ns 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:27.722 19:13:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:27.722 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@153 -- # ip link set 
cvl_0_1 netns nvmf_ns_spdk 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:27.723 10.0.0.1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:27.723 10.0.0.2 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 
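Condensed, the interface setup running here is: create the namespace, move the second port into it, address both sides and mirror each address into ifalias (the file that get_ip_address reads back later), bring the links up, and open the NVMe/TCP port in the firewall. The same steps as plain commands, using the device names discovered above (the link-up and iptables steps follow in the next few log lines):

    ip netns add nvmf_ns_spdk
    ip link set cvl_0_1 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_0
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT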
00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:27.723 19:13:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:27.723 19:13:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:27.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.616 ms 00:24:27.723 00:24:27.723 --- 10.0.0.1 ping statistics --- 00:24:27.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.723 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:27.723 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # 
local ip=10.0.0.2 in_ns= count=1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:27.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:24:27.724 00:24:27.724 --- 10.0.0.2 ping statistics --- 00:24:27.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.724 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:27.724 19:13:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=433874 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 433874 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 433874 ']' 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.724 19:13:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.724 [2024-11-05 19:13:56.267321] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
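The nvmfappstart sequence above launches the target inside the namespace and blocks until its RPC socket answers. A hedged sketch (binary path and flags are from the log; the polling loop is illustrative, waitforlisten in autotest_common.sh is more careful than this):

    ip netns exec nvmf_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the reactor is up and listening on the default RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Here -i 0 sets the shared-memory ID, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notices above), and -m 0x1 pins the app to core 0, matching the "Reactor started on core 0" notice.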
00:24:27.724 [2024-11-05 19:13:56.267393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.724 [2024-11-05 19:13:56.349212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.724 [2024-11-05 19:13:56.390476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.724 [2024-11-05 19:13:56.390508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.724 [2024-11-05 19:13:56.390516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.724 [2024-11-05 19:13:56.390522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.724 [2024-11-05 19:13:56.390528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.724 [2024-11-05 19:13:56.391191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.985 [2024-11-05 19:13:57.108515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.985 null0 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.985 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 39d1ea7a9f4f4e86af17f09938db5219 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.986 [2024-11-05 19:13:57.168798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.986 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.247 nvme0n1 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.247 [ 00:24:28.247 { 00:24:28.247 "name": "nvme0n1", 00:24:28.247 "aliases": [ 00:24:28.247 "39d1ea7a-9f4f-4e86-af17-f09938db5219" 00:24:28.247 ], 00:24:28.247 "product_name": "NVMe disk", 00:24:28.247 "block_size": 512, 00:24:28.247 "num_blocks": 2097152, 00:24:28.247 "uuid": "39d1ea7a-9f4f-4e86-af17-f09938db5219", 00:24:28.247 "numa_id": 0, 00:24:28.247 "assigned_rate_limits": { 00:24:28.247 "rw_ios_per_sec": 0, 00:24:28.247 "rw_mbytes_per_sec": 0, 00:24:28.247 "r_mbytes_per_sec": 0, 00:24:28.247 "w_mbytes_per_sec": 0 00:24:28.247 }, 00:24:28.247 "claimed": false, 00:24:28.247 "zoned": false, 00:24:28.247 "supported_io_types": { 00:24:28.247 "read": true, 00:24:28.247 "write": true, 00:24:28.247 "unmap": false, 00:24:28.247 "flush": true, 00:24:28.247 "reset": true, 00:24:28.247 "nvme_admin": true, 00:24:28.247 "nvme_io": true, 00:24:28.247 "nvme_io_md": false, 00:24:28.247 "write_zeroes": true, 00:24:28.247 "zcopy": false, 00:24:28.247 "get_zone_info": false, 00:24:28.247 "zone_management": false, 00:24:28.247 "zone_append": false, 00:24:28.247 "compare": true, 00:24:28.247 "compare_and_write": true, 00:24:28.247 "abort": true, 00:24:28.247 "seek_hole": false, 00:24:28.247 "seek_data": false, 00:24:28.247 "copy": true, 00:24:28.247 "nvme_iov_md": false 00:24:28.247 }, 00:24:28.247 
"memory_domains": [ 00:24:28.247 { 00:24:28.247 "dma_device_id": "system", 00:24:28.247 "dma_device_type": 1 00:24:28.247 } 00:24:28.247 ], 00:24:28.247 "driver_specific": { 00:24:28.247 "nvme": [ 00:24:28.247 { 00:24:28.247 "trid": { 00:24:28.247 "trtype": "TCP", 00:24:28.247 "adrfam": "IPv4", 00:24:28.247 "traddr": "10.0.0.2", 00:24:28.247 "trsvcid": "4420", 00:24:28.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.247 }, 00:24:28.247 "ctrlr_data": { 00:24:28.247 "cntlid": 1, 00:24:28.247 "vendor_id": "0x8086", 00:24:28.247 "model_number": "SPDK bdev Controller", 00:24:28.247 "serial_number": "00000000000000000000", 00:24:28.247 "firmware_revision": "25.01", 00:24:28.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.247 "oacs": { 00:24:28.247 "security": 0, 00:24:28.247 "format": 0, 00:24:28.247 "firmware": 0, 00:24:28.247 "ns_manage": 0 00:24:28.247 }, 00:24:28.247 "multi_ctrlr": true, 00:24:28.247 "ana_reporting": false 00:24:28.247 }, 00:24:28.247 "vs": { 00:24:28.247 "nvme_version": "1.3" 00:24:28.247 }, 00:24:28.247 "ns_data": { 00:24:28.247 "id": 1, 00:24:28.247 "can_share": true 00:24:28.247 } 00:24:28.247 } 00:24:28.247 ], 00:24:28.247 "mp_policy": "active_passive" 00:24:28.247 } 00:24:28.247 } 00:24:28.247 ] 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.247 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.247 [2024-11-05 19:13:57.445999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:28.247 [2024-11-05 19:13:57.446064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1891f60 (9): Bad file descriptor 00:24:28.509 [2024-11-05 19:13:57.577845] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 [ 00:24:28.509 { 00:24:28.509 "name": "nvme0n1", 00:24:28.509 "aliases": [ 00:24:28.509 "39d1ea7a-9f4f-4e86-af17-f09938db5219" 00:24:28.509 ], 00:24:28.509 "product_name": "NVMe disk", 00:24:28.509 "block_size": 512, 00:24:28.509 "num_blocks": 2097152, 00:24:28.509 "uuid": "39d1ea7a-9f4f-4e86-af17-f09938db5219", 00:24:28.509 "numa_id": 0, 00:24:28.509 "assigned_rate_limits": { 00:24:28.509 "rw_ios_per_sec": 0, 00:24:28.509 "rw_mbytes_per_sec": 0, 00:24:28.509 "r_mbytes_per_sec": 0, 00:24:28.509 "w_mbytes_per_sec": 0 00:24:28.509 }, 00:24:28.509 "claimed": false, 00:24:28.509 "zoned": false, 00:24:28.509 "supported_io_types": { 00:24:28.509 "read": true, 00:24:28.509 "write": true, 00:24:28.509 "unmap": false, 00:24:28.509 "flush": true, 00:24:28.509 "reset": true, 00:24:28.509 "nvme_admin": true, 00:24:28.509 "nvme_io": true, 00:24:28.509 "nvme_io_md": false, 00:24:28.509 "write_zeroes": true, 00:24:28.509 "zcopy": false, 00:24:28.509 "get_zone_info": false, 00:24:28.509 "zone_management": false, 00:24:28.509 "zone_append": false, 00:24:28.509 "compare": true, 00:24:28.509 "compare_and_write": true, 00:24:28.509 "abort": true, 00:24:28.509 "seek_hole": false, 00:24:28.509 "seek_data": false, 00:24:28.509 "copy": true, 00:24:28.509 "nvme_iov_md": false 00:24:28.509 }, 00:24:28.509 "memory_domains": [ 00:24:28.509 { 00:24:28.509 "dma_device_id": "system", 00:24:28.509 "dma_device_type": 1 00:24:28.509 } 00:24:28.509 ], 00:24:28.509 "driver_specific": { 00:24:28.509 "nvme": [ 00:24:28.509 { 00:24:28.509 "trid": { 00:24:28.509 "trtype": "TCP", 00:24:28.509 "adrfam": "IPv4", 00:24:28.509 "traddr": "10.0.0.2", 00:24:28.509 "trsvcid": "4420", 00:24:28.509 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.509 }, 00:24:28.509 "ctrlr_data": { 00:24:28.509 "cntlid": 2, 00:24:28.509 "vendor_id": "0x8086", 00:24:28.509 "model_number": "SPDK bdev Controller", 00:24:28.509 "serial_number": "00000000000000000000", 00:24:28.509 "firmware_revision": "25.01", 00:24:28.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.509 "oacs": { 00:24:28.509 "security": 0, 00:24:28.509 "format": 0, 00:24:28.509 "firmware": 0, 00:24:28.509 "ns_manage": 0 00:24:28.509 }, 00:24:28.509 "multi_ctrlr": true, 00:24:28.509 "ana_reporting": false 00:24:28.509 }, 00:24:28.509 "vs": { 00:24:28.509 "nvme_version": "1.3" 00:24:28.509 }, 00:24:28.509 "ns_data": { 00:24:28.509 "id": 1, 00:24:28.509 "can_share": true 00:24:28.509 } 00:24:28.509 } 00:24:28.509 ], 00:24:28.509 "mp_policy": "active_passive" 00:24:28.509 } 00:24:28.509 } 00:24:28.509 ] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
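Since the two JSON dumps differ in a single field, a quick way to assert the reset took effect is to pull cntlid out directly. jq is not what the test itself uses (it pattern-matches the rpc_cmd output), so treat this as an illustrative one-liner over the structure shown above:

    rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after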
00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.k7fZTSQBfv 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.k7fZTSQBfv 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.k7fZTSQBfv 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 [2024-11-05 19:13:57.666687] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.509 [2024-11-05 19:13:57.666804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 [2024-11-05 19:13:57.690772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.509 nvme0n1 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.509 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.509 [ 00:24:28.509 { 00:24:28.509 "name": "nvme0n1", 00:24:28.509 "aliases": [ 00:24:28.509 "39d1ea7a-9f4f-4e86-af17-f09938db5219" 00:24:28.509 ], 00:24:28.509 "product_name": "NVMe disk", 00:24:28.509 "block_size": 512, 00:24:28.509 "num_blocks": 2097152, 00:24:28.509 "uuid": "39d1ea7a-9f4f-4e86-af17-f09938db5219", 00:24:28.509 "numa_id": 0, 00:24:28.509 "assigned_rate_limits": { 00:24:28.509 "rw_ios_per_sec": 0, 00:24:28.509 "rw_mbytes_per_sec": 0, 00:24:28.509 "r_mbytes_per_sec": 0, 00:24:28.509 "w_mbytes_per_sec": 0 00:24:28.509 }, 00:24:28.509 "claimed": false, 00:24:28.509 "zoned": false, 00:24:28.509 "supported_io_types": { 00:24:28.509 "read": true, 00:24:28.509 "write": true, 00:24:28.509 "unmap": false, 00:24:28.509 "flush": true, 00:24:28.509 "reset": true, 00:24:28.509 "nvme_admin": true, 00:24:28.509 "nvme_io": true, 00:24:28.509 "nvme_io_md": false, 00:24:28.509 "write_zeroes": true, 00:24:28.510 "zcopy": false, 00:24:28.510 "get_zone_info": false, 00:24:28.510 "zone_management": false, 00:24:28.510 "zone_append": false, 00:24:28.510 "compare": true, 00:24:28.510 "compare_and_write": true, 00:24:28.510 "abort": true, 00:24:28.510 "seek_hole": false, 00:24:28.510 "seek_data": false, 00:24:28.510 "copy": true, 00:24:28.510 "nvme_iov_md": false 00:24:28.510 }, 00:24:28.510 "memory_domains": [ 00:24:28.510 { 00:24:28.510 "dma_device_id": "system", 00:24:28.510 "dma_device_type": 1 00:24:28.510 } 00:24:28.510 ], 00:24:28.510 "driver_specific": { 00:24:28.510 "nvme": [ 00:24:28.510 { 00:24:28.510 "trid": { 00:24:28.510 "trtype": "TCP", 00:24:28.510 "adrfam": "IPv4", 00:24:28.510 "traddr": "10.0.0.2", 00:24:28.510 "trsvcid": "4421", 00:24:28.510 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.510 }, 00:24:28.510 "ctrlr_data": { 00:24:28.510 "cntlid": 3, 00:24:28.510 "vendor_id": "0x8086", 00:24:28.510 "model_number": "SPDK bdev Controller", 00:24:28.510 "serial_number": "00000000000000000000", 00:24:28.510 "firmware_revision": "25.01", 00:24:28.510 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.510 "oacs": { 00:24:28.510 "security": 0, 00:24:28.510 "format": 0, 00:24:28.510 "firmware": 0, 00:24:28.510 "ns_manage": 0 00:24:28.510 }, 00:24:28.510 "multi_ctrlr": true, 00:24:28.510 "ana_reporting": false 00:24:28.510 }, 00:24:28.510 "vs": { 00:24:28.510 "nvme_version": "1.3" 00:24:28.510 }, 00:24:28.510 "ns_data": { 00:24:28.510 "id": 1, 00:24:28.510 "can_share": true 00:24:28.510 } 00:24:28.510 } 00:24:28.510 ], 00:24:28.510 "mp_policy": "active_passive" 00:24:28.510 } 00:24:28.510 } 00:24:28.510 ] 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.k7fZTSQBfv 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
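The TLS leg just traced follows the same attach pattern with a PSK bolted on. Reconstructed from the rpc_cmd lines above; the redirect of the key into $key_path is implied by the chmod that follows it in host/async_init.sh rather than shown verbatim in the trace:

    key_path=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"                        # restrict permissions before registering the key
    rpc.py keyring_file_add_key key0 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both sides reference the key by its keyring name (key0) rather than the path, and both tcp.c and bdev_nvme_rpc.c flag TLS support as experimental in the notices above; cntlid 3 in the final dump is simply the third controller allocated on the subsystem.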
00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:28.510 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:28.510 rmmod nvme_tcp 00:24:28.771 rmmod nvme_fabrics 00:24:28.771 rmmod nvme_keyring 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 433874 ']' 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 433874 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 433874 ']' 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 433874 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 433874 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 433874' 00:24:28.771 killing process with pid 433874 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 433874 00:24:28.771 19:13:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 433874 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@264 -- # local dev 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:28.771 19:13:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # return 0 
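Teardown, begun above and continuing in the flush/iptables trace below, unwinds everything in reverse. A hedged sketch; the _remove_target_ns body is redirected away in the log (15> /dev/null), so the ip netns delete line is an assumption:

    sync
    modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_fabrics/nvme_keyring going too
    modprobe -v -r nvme-fabrics
    kill 433874                      # killprocess $nvmfpid
    ip netns delete nvmf_ns_spdk     # assumed body of _remove_target_ns
    ip addr flush dev cvl_0_0        # the flush_ip calls traced below
    ip addr flush dev cvl_0_1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged with the SPDK_NVMF comment

Tagging each inserted rule with an -m comment (see the ipts wrapper that added the port-4420 ACCEPT rule earlier) is what makes the last line safe: the restore rewrites the whole ruleset minus exactly the rules this run added.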
00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:31.319 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@284 -- # iptr 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-save 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-restore 00:24:31.320 00:24:31.320 real 0m11.698s 00:24:31.320 user 0m4.253s 00:24:31.320 sys 0m5.964s 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 ************************************ 00:24:31.320 END TEST nvmf_async_init 00:24:31.320 ************************************ 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 ************************************ 00:24:31.320 START TEST nvmf_identify 00:24:31.320 ************************************ 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.320 * Looking for test storage... 00:24:31.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.320 --rc genhtml_branch_coverage=1 00:24:31.320 --rc genhtml_function_coverage=1 00:24:31.320 --rc genhtml_legend=1 00:24:31.320 --rc geninfo_all_blocks=1 00:24:31.320 --rc geninfo_unexecuted_blocks=1 00:24:31.320 00:24:31.320 ' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.320 --rc genhtml_branch_coverage=1 00:24:31.320 --rc genhtml_function_coverage=1 00:24:31.320 --rc genhtml_legend=1 00:24:31.320 --rc geninfo_all_blocks=1 00:24:31.320 --rc geninfo_unexecuted_blocks=1 00:24:31.320 00:24:31.320 ' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.320 --rc genhtml_branch_coverage=1 00:24:31.320 --rc genhtml_function_coverage=1 00:24:31.320 --rc genhtml_legend=1 00:24:31.320 --rc geninfo_all_blocks=1 00:24:31.320 --rc geninfo_unexecuted_blocks=1 00:24:31.320 00:24:31.320 ' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.320 --rc genhtml_branch_coverage=1 00:24:31.320 --rc genhtml_function_coverage=1 00:24:31.320 --rc genhtml_legend=1 00:24:31.320 --rc geninfo_all_blocks=1 00:24:31.320 --rc geninfo_unexecuted_blocks=1 00:24:31.320 00:24:31.320 ' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.320 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:31.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.321 19:14:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:24:31.321 19:14:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.477 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.478 19:14:07 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.478 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.478 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- 
# [[ up == up ]] 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.478 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # create_target_ns 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:39.479 10.0.0.1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:39.479 10.0.0.2 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ phy == veth 
]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:39.479 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:39.480 19:14:07 
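Stripped of the eval/xtrace plumbing, the interface setup traced above reduces to the following commands, copied verbatim from the log (run as root; cvl_0_0 and cvl_0_1 are this machine's E810 netdevs):

    #!/usr/bin/env bash
    # One initiator/target pair, exactly as configured above.
    set -e
    ip netns add nvmf_ns_spdk                        # target network namespace
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_0              # initiator side
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    # Open the NVMe/TCP port, tagged so teardown can find the rule again.
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    # Both directions are then verified with ping, as the next entries show.
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
    ping -c 1 10.0.0.2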
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:39.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.595 ms 00:24:39.480 00:24:39.480 --- 10.0.0.1 ping statistics --- 00:24:39.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.480 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:39.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:39.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:24:39.480 00:24:39.480 --- 10.0.0.2 ping statistics --- 00:24:39.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.480 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=438554 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 438554 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 438554 ']' 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.480 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.481 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.481 19:14:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.481 [2024-11-05 19:14:07.976368] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:39.481 [2024-11-05 19:14:07.976438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.481 [2024-11-05 19:14:08.059448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.481 [2024-11-05 19:14:08.102549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
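waitforlisten's body is not part of this excerpt; one way to approximate what it blocks on is to poll the RPC socket that nvmf_tgt opens at /var/tmp/spdk.sock, assuming the stock scripts/rpc.py from the checked-out tree:

    #!/usr/bin/env bash
    # Launch the target inside the namespace (flags verbatim from the log),
    # then poll until the RPC socket answers -- a rough stand-in for
    # waitforlisten, whose real implementation is not shown here.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" spdk_get_version &>/dev/null; then
            echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done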
00:24:39.481 [2024-11-05 19:14:08.102585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.481 [2024-11-05 19:14:08.102593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.481 [2024-11-05 19:14:08.102600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.481 [2024-11-05 19:14:08.102606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.481 [2024-11-05 19:14:08.104461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.481 [2024-11-05 19:14:08.104565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.481 [2024-11-05 19:14:08.104723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.481 [2024-11-05 19:14:08.104724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.481 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.481 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:39.481 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.481 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.481 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.481 [2024-11-05 19:14:08.791406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 Malloc0 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 
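The four "Reactor started on core N" notices follow directly from the -m 0xF mask passed to nvmf_tgt: each set bit selects one core. A quick decode:

    #!/usr/bin/env bash
    # Expand a core mask the way the notices above reflect it: 0xF -> cores 0-3.
    mask=0xF
    cores=()
    for ((bit = 0; bit < 64; bit++)); do
        (( (mask >> bit) & 1 )) && cores+=("$bit")
    done
    echo "mask $mask -> ${#cores[@]} cores: ${cores[*]}"   # mask 0xF -> 4 cores: 0 1 2 3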
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 [2024-11-05 19:14:08.903161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.743 [ 00:24:39.743 { 00:24:39.743 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:39.743 "subtype": "Discovery", 00:24:39.743 "listen_addresses": [ 00:24:39.743 { 00:24:39.743 "trtype": "TCP", 00:24:39.743 "adrfam": "IPv4", 00:24:39.743 "traddr": "10.0.0.2", 00:24:39.743 "trsvcid": "4420" 00:24:39.743 } 00:24:39.743 ], 00:24:39.743 "allow_any_host": true, 00:24:39.743 "hosts": [] 00:24:39.743 }, 00:24:39.743 { 00:24:39.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.743 "subtype": "NVMe", 00:24:39.743 "listen_addresses": [ 00:24:39.743 { 00:24:39.743 "trtype": "TCP", 00:24:39.743 "adrfam": "IPv4", 00:24:39.743 "traddr": "10.0.0.2", 00:24:39.743 "trsvcid": "4420" 00:24:39.743 } 00:24:39.743 ], 00:24:39.743 "allow_any_host": true, 00:24:39.743 "hosts": [], 00:24:39.743 "serial_number": "SPDK00000000000001", 00:24:39.743 "model_number": "SPDK bdev Controller", 00:24:39.743 "max_namespaces": 32, 00:24:39.743 "min_cntlid": 1, 00:24:39.743 "max_cntlid": 65519, 00:24:39.743 "namespaces": [ 00:24:39.743 { 00:24:39.743 "nsid": 1, 00:24:39.743 "bdev_name": "Malloc0", 00:24:39.743 "name": "Malloc0", 00:24:39.743 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:39.743 "eui64": "ABCDEF0123456789", 00:24:39.743 "uuid": "6a17da61-76bd-4cef-9352-c489d989ac2a" 00:24:39.743 } 00:24:39.743 ] 00:24:39.743 } 00:24:39.743 ] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.743 19:14:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:39.743 [2024-11-05 19:14:08.966618] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
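The nvmf_get_subsystems JSON above is the direct result of the rpc_cmd sequence traced since target start; rpc_cmd is the test harness's wrapper around scripts/rpc.py, so the same state can be built stand-alone like this (flags copied from the log):

    #!/usr/bin/env bash
    # The RPC sequence behind the subsystem listing above, minus the harness.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport (flags as traced)
    rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_get_subsystems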
00:24:39.743 [2024-11-05 19:14:08.966660] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438712 ] 00:24:39.743 [2024-11-05 19:14:09.022322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:39.743 [2024-11-05 19:14:09.022379] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:39.743 [2024-11-05 19:14:09.022386] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:39.743 [2024-11-05 19:14:09.022398] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:39.743 [2024-11-05 19:14:09.022407] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:39.743 [2024-11-05 19:14:09.023142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:39.743 [2024-11-05 19:14:09.023178] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22ad690 0 00:24:39.743 [2024-11-05 19:14:09.033759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:39.743 [2024-11-05 19:14:09.033773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:39.743 [2024-11-05 19:14:09.033778] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:39.743 [2024-11-05 19:14:09.033782] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:39.743 [2024-11-05 19:14:09.033814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.743 [2024-11-05 19:14:09.033821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.743 [2024-11-05 19:14:09.033825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.743 [2024-11-05 19:14:09.033839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:39.743 [2024-11-05 19:14:09.033858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.743 [2024-11-05 19:14:09.044758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.743 [2024-11-05 19:14:09.044768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.743 [2024-11-05 19:14:09.044772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.743 [2024-11-05 19:14:09.044777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.743 [2024-11-05 19:14:09.044790] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:39.743 [2024-11-05 19:14:09.044798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:39.743 [2024-11-05 19:14:09.044803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:39.743 [2024-11-05 19:14:09.044817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.743 [2024-11-05 19:14:09.044821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.743 
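The connect parameters echoed above (adrfam 1 is IPv4, trsvcid 4420) all come from the -r transport ID string handed to spdk_nvme_identify; the invocation on its own:

    #!/usr/bin/env bash
    # Stand-alone form of the identify run traced above. -L all enables every
    # debug log flag, which is what produces the *DEBUG* entries that follow.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all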
[2024-11-05 19:14:09.044825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.044836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.044851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.045030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.045037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.045041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.045050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:39.744 [2024-11-05 19:14:09.045057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:39.744 [2024-11-05 19:14:09.045065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.045079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.045090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.045249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.045256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.045259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.045269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:39.744 [2024-11-05 19:14:09.045277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.045283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.045298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.045308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.045468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.045474] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.045478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.045487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.045496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.045510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.045525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.045727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.045733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.045737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.045750] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:39.744 [2024-11-05 19:14:09.045755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.045763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.045871] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:39.744 [2024-11-05 19:14:09.045876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.045885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.045893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.045899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.045910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.046102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.046109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.046112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:39.744 [2024-11-05 19:14:09.046116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.046121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:39.744 [2024-11-05 19:14:09.046130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.046145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.046155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.046319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.046325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.046329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.046338] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:39.744 [2024-11-05 19:14:09.046343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:39.744 [2024-11-05 19:14:09.046351] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:39.744 [2024-11-05 19:14:09.046361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:39.744 [2024-11-05 19:14:09.046370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.046381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.744 [2024-11-05 19:14:09.046391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.046598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.744 [2024-11-05 19:14:09.046605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.744 [2024-11-05 19:14:09.046609] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046614] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ad690): datao=0, datal=4096, cccid=0 00:24:39.744 [2024-11-05 19:14:09.046618] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230f100) on tqpair(0x22ad690): expected_datao=0, payload_size=4096 00:24:39.744 [2024-11-05 19:14:09.046623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046631] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046635] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.046775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.046779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.046790] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:39.744 [2024-11-05 19:14:09.046795] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:39.744 [2024-11-05 19:14:09.046800] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:39.744 [2024-11-05 19:14:09.046805] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:39.744 [2024-11-05 19:14:09.046813] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:39.744 [2024-11-05 19:14:09.046818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:39.744 [2024-11-05 19:14:09.046827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:39.744 [2024-11-05 19:14:09.046834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.046841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.744 [2024-11-05 19:14:09.046848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:39.744 [2024-11-05 19:14:09.046860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.744 [2024-11-05 19:14:09.047055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.744 [2024-11-05 19:14:09.047062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.744 [2024-11-05 19:14:09.047065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.744 [2024-11-05 19:14:09.047069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690 00:24:39.744 [2024-11-05 19:14:09.047081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.745 
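The last several entries trace the standard fabrics controller bring-up: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN set to 1, CSTS.RDY polled to 1, then Identify Controller and AER configuration. The kernel initiator walks the same ladder, so the target state built above can be cross-checked with nvme-cli (assumed installed; nvme-tcp was modprobed earlier in this log):

    #!/usr/bin/env bash
    # Host-side cross-check of the controller bring-up traced above, using
    # nvme-cli. The kernel initiator performs the same CONNECT / read VS,CAP /
    # CC.EN=1 / poll CSTS.RDY / Identify sequence.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1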
[2024-11-05 19:14:09.047101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.745 [2024-11-05 19:14:09.047121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.745 [2024-11-05 19:14:09.047140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.745 [2024-11-05 19:14:09.047158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:39.745 [2024-11-05 19:14:09.047166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:39.745 [2024-11-05 19:14:09.047173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.745 [2024-11-05 19:14:09.047195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f100, cid 0, qid 0 00:24:39.745 [2024-11-05 19:14:09.047200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f280, cid 1, qid 0 00:24:39.745 [2024-11-05 19:14:09.047205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f400, cid 2, qid 0 00:24:39.745 [2024-11-05 19:14:09.047210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:39.745 [2024-11-05 19:14:09.047215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f700, cid 4, qid 0 00:24:39.745 [2024-11-05 19:14:09.047430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.745 [2024-11-05 19:14:09.047437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.745 [2024-11-05 19:14:09.047440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f700) on tqpair=0x22ad690 00:24:39.745 [2024-11-05 
19:14:09.047452] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:39.745 [2024-11-05 19:14:09.047457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:39.745 [2024-11-05 19:14:09.047468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.047480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.745 [2024-11-05 19:14:09.047490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f700, cid 4, qid 0 00:24:39.745 [2024-11-05 19:14:09.047705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.745 [2024-11-05 19:14:09.047712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.745 [2024-11-05 19:14:09.047715] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047719] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ad690): datao=0, datal=4096, cccid=4 00:24:39.745 [2024-11-05 19:14:09.047724] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230f700) on tqpair(0x22ad690): expected_datao=0, payload_size=4096 00:24:39.745 [2024-11-05 19:14:09.047728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047752] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.745 [2024-11-05 19:14:09.047965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.745 [2024-11-05 19:14:09.047968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.047972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f700) on tqpair=0x22ad690 00:24:39.745 [2024-11-05 19:14:09.047984] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:39.745 [2024-11-05 19:14:09.048007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.048018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.745 [2024-11-05 19:14:09.048025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22ad690) 00:24:39.745 [2024-11-05 19:14:09.048038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.745 [2024-11-05 19:14:09.048052] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f700, cid 4, qid 0 00:24:39.745 [2024-11-05 19:14:09.048057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f880, cid 5, qid 0 00:24:39.745 [2024-11-05 19:14:09.048265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:39.745 [2024-11-05 19:14:09.048272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:39.745 [2024-11-05 19:14:09.048275] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ad690): datao=0, datal=1024, cccid=4 00:24:39.745 [2024-11-05 19:14:09.048283] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230f700) on tqpair(0x22ad690): expected_datao=0, payload_size=1024 00:24:39.745 [2024-11-05 19:14:09.048288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048294] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:39.745 [2024-11-05 19:14:09.048310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:39.745 [2024-11-05 19:14:09.048315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:39.745 [2024-11-05 19:14:09.048319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f880) on tqpair=0x22ad690 00:24:40.011 [2024-11-05 19:14:09.090758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.011 [2024-11-05 19:14:09.090770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.011 [2024-11-05 19:14:09.090774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.011 [2024-11-05 19:14:09.090778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f700) on tqpair=0x22ad690 00:24:40.011 [2024-11-05 19:14:09.090790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.011 [2024-11-05 19:14:09.090794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ad690) 00:24:40.011 [2024-11-05 19:14:09.090801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.011 [2024-11-05 19:14:09.090816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f700, cid 4, qid 0 00:24:40.011 [2024-11-05 19:14:09.091003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.011 [2024-11-05 19:14:09.091010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.011 [2024-11-05 19:14:09.091013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.011 [2024-11-05 19:14:09.091017] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ad690): datao=0, datal=3072, cccid=4 00:24:40.011 [2024-11-05 19:14:09.091022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230f700) on tqpair(0x22ad690): expected_datao=0, payload_size=3072 00:24:40.011 [2024-11-05 19:14:09.091026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.011 [2024-11-05 19:14:09.091043] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
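The NOTICE lines print each admin command's raw dwords, so the discovery read-back above is fully decodable: GET LOG PAGE keeps the log page id in cdw10 bits 7:0 (0x70 is the discovery log) and the dword count minus one in bits 31:16, which is why cdw10:00ff0070 produced the datal=1024 transfer above, cdw10:02ff0070 the datal=3072 transfer, and cdw10:00010070 the 8-byte header re-check just below. A small self-contained C sketch (hypothetical helper, field layout from the NVMe base specification, not SPDK code) that reproduces those numbers:

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical decoder, not SPDK code. Field layout per the NVMe base
 * spec: GET LOG PAGE cdw10 bits 7:0 = log page id (0x70 = discovery),
 * bits 31:16 = NUMDL, the low 16 bits of (dword count - 1).
 */
static void decode_get_log_page_cdw10(uint32_t cdw10)
{
	unsigned lid   = cdw10 & 0xffu;
	unsigned numdl = (cdw10 >> 16) & 0xffffu;

	printf("cdw10:%08x -> lid 0x%02x, %u-byte transfer\n",
	       cdw10, lid, (numdl + 1u) * 4u);
}

int main(void)
{
	decode_get_log_page_cdw10(0x00ff0070u); /* matches datal=1024 above */
	decode_get_log_page_cdw10(0x02ff0070u); /* matches datal=3072 above */
	decode_get_log_page_cdw10(0x00010070u); /* matches datal=8 below    */
	return 0;
}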
00:24:40.011 [2024-11-05 19:14:09.091047] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.132944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:40.011 [2024-11-05 19:14:09.132954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:40.011 [2024-11-05 19:14:09.132957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.132961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f700) on tqpair=0x22ad690
00:24:40.011 [2024-11-05 19:14:09.132971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.132975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22ad690)
00:24:40.011 [2024-11-05 19:14:09.132982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.011 [2024-11-05 19:14:09.132997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f700, cid 4, qid 0
00:24:40.011 [2024-11-05 19:14:09.133217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:40.011 [2024-11-05 19:14:09.133223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:40.011 [2024-11-05 19:14:09.133227] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.133231] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22ad690): datao=0, datal=8, cccid=4
00:24:40.011 [2024-11-05 19:14:09.133235] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230f700) on tqpair(0x22ad690): expected_datao=0, payload_size=8
00:24:40.011 [2024-11-05 19:14:09.133240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.133246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.133250] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.174923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:40.011 [2024-11-05 19:14:09.174932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:40.011 [2024-11-05 19:14:09.174935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:40.011 [2024-11-05 19:14:09.174943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f700) on tqpair=0x22ad690
00:24:40.011 =====================================================
00:24:40.011 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:40.011 =====================================================
00:24:40.011 Controller Capabilities/Features
00:24:40.011 ================================
00:24:40.011 Vendor ID: 0000
00:24:40.011 Subsystem Vendor ID: 0000
00:24:40.011 Serial Number: ....................
00:24:40.011 Model Number: ........................................
00:24:40.011 Firmware Version: 25.01
00:24:40.011 Recommended Arb Burst: 0
00:24:40.011 IEEE OUI Identifier: 00 00 00
00:24:40.011 Multi-path I/O
00:24:40.011 May have multiple subsystem ports: No
00:24:40.011 May have multiple controllers: No
00:24:40.011 Associated with SR-IOV VF: No
00:24:40.011 Max Data Transfer Size: 131072
00:24:40.011 Max Number of Namespaces: 0
00:24:40.011 Max Number of I/O Queues: 1024
00:24:40.011 NVMe Specification Version (VS): 1.3
00:24:40.011 NVMe Specification Version (Identify): 1.3
00:24:40.011 Maximum Queue Entries: 128
00:24:40.011 Contiguous Queues Required: Yes
00:24:40.011 Arbitration Mechanisms Supported
00:24:40.011 Weighted Round Robin: Not Supported
00:24:40.011 Vendor Specific: Not Supported
00:24:40.011 Reset Timeout: 15000 ms
00:24:40.011 Doorbell Stride: 4 bytes
00:24:40.011 NVM Subsystem Reset: Not Supported
00:24:40.011 Command Sets Supported
00:24:40.011 NVM Command Set: Supported
00:24:40.011 Boot Partition: Not Supported
00:24:40.011 Memory Page Size Minimum: 4096 bytes
00:24:40.011 Memory Page Size Maximum: 4096 bytes
00:24:40.011 Persistent Memory Region: Not Supported
00:24:40.011 Optional Asynchronous Events Supported
00:24:40.011 Namespace Attribute Notices: Not Supported
00:24:40.011 Firmware Activation Notices: Not Supported
00:24:40.011 ANA Change Notices: Not Supported
00:24:40.011 PLE Aggregate Log Change Notices: Not Supported
00:24:40.011 LBA Status Info Alert Notices: Not Supported
00:24:40.011 EGE Aggregate Log Change Notices: Not Supported
00:24:40.011 Normal NVM Subsystem Shutdown event: Not Supported
00:24:40.011 Zone Descriptor Change Notices: Not Supported
00:24:40.011 Discovery Log Change Notices: Supported
00:24:40.011 Controller Attributes
00:24:40.011 128-bit Host Identifier: Not Supported
00:24:40.011 Non-Operational Permissive Mode: Not Supported
00:24:40.011 NVM Sets: Not Supported
00:24:40.011 Read Recovery Levels: Not Supported
00:24:40.011 Endurance Groups: Not Supported
00:24:40.011 Predictable Latency Mode: Not Supported
00:24:40.011 Traffic Based Keep Alive: Not Supported
00:24:40.011 Namespace Granularity: Not Supported
00:24:40.011 SQ Associations: Not Supported
00:24:40.011 UUID List: Not Supported
00:24:40.011 Multi-Domain Subsystem: Not Supported
00:24:40.011 Fixed Capacity Management: Not Supported
00:24:40.011 Variable Capacity Management: Not Supported
00:24:40.011 Delete Endurance Group: Not Supported
00:24:40.011 Delete NVM Set: Not Supported
00:24:40.011 Extended LBA Formats Supported: Not Supported
00:24:40.011 Flexible Data Placement Supported: Not Supported
00:24:40.011
00:24:40.011 Controller Memory Buffer Support
00:24:40.011 ================================
00:24:40.011 Supported: No
00:24:40.011
00:24:40.011 Persistent Memory Region Support
00:24:40.011 ================================
00:24:40.011 Supported: No
00:24:40.011
00:24:40.011 Admin Command Set Attributes
00:24:40.011 ============================
00:24:40.011 Security Send/Receive: Not Supported
00:24:40.011 Format NVM: Not Supported
00:24:40.011 Firmware Activate/Download: Not Supported
00:24:40.011 Namespace Management: Not Supported
00:24:40.011 Device Self-Test: Not Supported
00:24:40.011 Directives: Not Supported
00:24:40.011 NVMe-MI: Not Supported
00:24:40.011 Virtualization Management: Not Supported
00:24:40.011 Doorbell Buffer Config: Not Supported
00:24:40.011 Get LBA Status Capability: Not Supported
00:24:40.011 Command & Feature Lockdown Capability: Not Supported
00:24:40.011 Abort Command Limit: 1
00:24:40.011 Async Event Request Limit: 4
00:24:40.011 Number of Firmware Slots: N/A
00:24:40.011 Firmware Slot 1 Read-Only: N/A
00:24:40.011 Firmware Activation Without Reset: N/A
00:24:40.011 Multiple Update Detection Support: N/A
00:24:40.011 Firmware Update Granularity: No Information Provided
00:24:40.011 Per-Namespace SMART Log: No
00:24:40.011 Asymmetric Namespace Access Log Page: Not Supported
00:24:40.011 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:40.011 Command Effects Log Page: Not Supported
00:24:40.011 Get Log Page Extended Data: Supported
00:24:40.011 Telemetry Log Pages: Not Supported
00:24:40.011 Persistent Event Log Pages: Not Supported
00:24:40.011 Supported Log Pages Log Page: May Support
00:24:40.011 Commands Supported & Effects Log Page: Not Supported
00:24:40.011 Feature Identifiers & Effects Log Page: May Support
00:24:40.011 NVMe-MI Commands & Effects Log Page: May Support
00:24:40.012 Data Area 4 for Telemetry Log: Not Supported
00:24:40.012 Error Log Page Entries Supported: 128
00:24:40.012 Keep Alive: Not Supported
00:24:40.012
00:24:40.012 NVM Command Set Attributes
00:24:40.012 ==========================
00:24:40.012 Submission Queue Entry Size
00:24:40.012 Max: 1
00:24:40.012 Min: 1
00:24:40.012 Completion Queue Entry Size
00:24:40.012 Max: 1
00:24:40.012 Min: 1
00:24:40.012 Number of Namespaces: 0
00:24:40.012 Compare Command: Not Supported
00:24:40.012 Write Uncorrectable Command: Not Supported
00:24:40.012 Dataset Management Command: Not Supported
00:24:40.012 Write Zeroes Command: Not Supported
00:24:40.012 Set Features Save Field: Not Supported
00:24:40.012 Reservations: Not Supported
00:24:40.012 Timestamp: Not Supported
00:24:40.012 Copy: Not Supported
00:24:40.012 Volatile Write Cache: Not Present
00:24:40.012 Atomic Write Unit (Normal): 1
00:24:40.012 Atomic Write Unit (PFail): 1
00:24:40.012 Atomic Compare & Write Unit: 1
00:24:40.012 Fused Compare & Write: Supported
00:24:40.012 Scatter-Gather List
00:24:40.012 SGL Command Set: Supported
00:24:40.012 SGL Keyed: Supported
00:24:40.012 SGL Bit Bucket Descriptor: Not Supported
00:24:40.012 SGL Metadata Pointer: Not Supported
00:24:40.012 Oversized SGL: Not Supported
00:24:40.012 SGL Metadata Address: Not Supported
00:24:40.012 SGL Offset: Supported
00:24:40.012 Transport SGL Data Block: Not Supported
00:24:40.012 Replay Protected Memory Block: Not Supported
00:24:40.012
00:24:40.012 Firmware Slot Information
00:24:40.012 =========================
00:24:40.012 Active slot: 0
00:24:40.012
00:24:40.012
00:24:40.012 Error Log
00:24:40.012 =========
00:24:40.012
00:24:40.012 Active Namespaces
00:24:40.012 =================
00:24:40.012 Discovery Log Page
00:24:40.012 ==================
00:24:40.012 Generation Counter: 2
00:24:40.012 Number of Records: 2
00:24:40.012 Record Format: 0
00:24:40.012
00:24:40.012 Discovery Log Entry 0
00:24:40.012 ----------------------
00:24:40.012 Transport Type: 3 (TCP)
00:24:40.012 Address Family: 1 (IPv4)
00:24:40.012 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:40.012 Entry Flags:
00:24:40.012 Duplicate Returned Information: 1
00:24:40.012 Explicit Persistent Connection Support for Discovery: 1
00:24:40.012 Transport Requirements:
00:24:40.012 Secure Channel: Not Required
00:24:40.012 Port ID: 0 (0x0000)
00:24:40.012 Controller ID: 65535 (0xffff)
00:24:40.012 Admin Max SQ Size: 128
00:24:40.012 Transport Service Identifier: 4420
00:24:40.012 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:40.012 Transport Address: 10.0.0.2
00:24:40.012 Discovery Log Entry 1
00:24:40.012 ----------------------
00:24:40.012 Transport Type: 3 (TCP)
00:24:40.012 Address Family: 1 (IPv4)
00:24:40.012 Subsystem Type: 2 (NVM Subsystem)
00:24:40.012 Entry Flags:
00:24:40.012 Duplicate Returned Information: 0
00:24:40.012 Explicit Persistent Connection Support for Discovery: 0
00:24:40.012 Transport Requirements:
00:24:40.012 Secure Channel: Not Required
00:24:40.012 Port ID: 0 (0x0000)
00:24:40.012 Controller ID: 65535 (0xffff)
00:24:40.012 Admin Max SQ Size: 128
00:24:40.012 Transport Service Identifier: 4420
00:24:40.012 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:40.012 Transport Address: 10.0.0.2 [2024-11-05 19:14:09.175029] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:40.012 [2024-11-05 19:14:09.175041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f100) on tqpair=0x22ad690
00:24:40.012 [2024-11-05 19:14:09.175047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.012 [2024-11-05 19:14:09.175053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f280) on tqpair=0x22ad690
00:24:40.012 [2024-11-05 19:14:09.175058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.012 [2024-11-05 19:14:09.175063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f400) on tqpair=0x22ad690
00:24:40.012 [2024-11-05 19:14:09.175067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.012 [2024-11-05 19:14:09.175072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690
00:24:40.012 [2024-11-05 19:14:09.175077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.012 [2024-11-05 19:14:09.175086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:40.012 [2024-11-05 19:14:09.175090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:40.012 [2024-11-05 19:14:09.175093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690)
00:24:40.012 [2024-11-05 19:14:09.175101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.012 [2024-11-05 19:14:09.175114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0
00:24:40.012 [2024-11-05 19:14:09.175308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:40.012 [2024-11-05 19:14:09.175314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:40.012 [2024-11-05 19:14:09.175318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:40.012 [2024-11-05 19:14:09.175321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690
00:24:40.012 [2024-11-05 19:14:09.175331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:40.012 [2024-11-05 19:14:09.175335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:40.012 [2024-11-05 19:14:09.175338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690)
00:24:40.012 [2024-11-05
19:14:09.175345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.012 [2024-11-05 19:14:09.175359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.012 [2024-11-05 19:14:09.175546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.012 [2024-11-05 19:14:09.175552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.012 [2024-11-05 19:14:09.175556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.012 [2024-11-05 19:14:09.175565] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:40.012 [2024-11-05 19:14:09.175570] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:40.012 [2024-11-05 19:14:09.175579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.012 [2024-11-05 19:14:09.175594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.012 [2024-11-05 19:14:09.175606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.012 [2024-11-05 19:14:09.175796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.012 [2024-11-05 19:14:09.175803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.012 [2024-11-05 19:14:09.175807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.012 [2024-11-05 19:14:09.175821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.175828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.012 [2024-11-05 19:14:09.175835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.012 [2024-11-05 19:14:09.175845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.012 [2024-11-05 19:14:09.176017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.012 [2024-11-05 19:14:09.176024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.012 [2024-11-05 19:14:09.176027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.012 [2024-11-05 19:14:09.176041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176048] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.012 [2024-11-05 19:14:09.176055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.012 [2024-11-05 19:14:09.176065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.012 [2024-11-05 19:14:09.176240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.012 [2024-11-05 19:14:09.176247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.012 [2024-11-05 19:14:09.176250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.012 [2024-11-05 19:14:09.176264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.012 [2024-11-05 19:14:09.176271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.012 [2024-11-05 19:14:09.176278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.012 [2024-11-05 19:14:09.176288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.013 [2024-11-05 19:14:09.176494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.176501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.176504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.176508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.013 [2024-11-05 19:14:09.176518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.176522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.176526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.013 [2024-11-05 19:14:09.176532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.013 [2024-11-05 19:14:09.176542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0 00:24:40.013 [2024-11-05 19:14:09.176741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.180761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.180766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.180770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690 00:24:40.013 [2024-11-05 19:14:09.180780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.180784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.180788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22ad690) 00:24:40.013 [2024-11-05 19:14:09.180795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.013 [2024-11-05 19:14:09.180807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230f580, cid 3, qid 0
00:24:40.013 [2024-11-05 19:14:09.180989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:40.013 [2024-11-05 19:14:09.180995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:40.013 [2024-11-05 19:14:09.180999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:40.013 [2024-11-05 19:14:09.181003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230f580) on tqpair=0x22ad690
00:24:40.013 [2024-11-05 19:14:09.181010] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:24:40.013
00:24:40.013 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:40.013 [2024-11-05 19:14:09.225597] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:24:40.013 [2024-11-05 19:14:09.225637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438833 ]
00:24:40.013 [2024-11-05 19:14:09.277888] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:24:40.013 [2024-11-05 19:14:09.277940] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:40.013 [2024-11-05 19:14:09.277946] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:40.013 [2024-11-05 19:14:09.277959] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:40.013 [2024-11-05 19:14:09.277967] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:40.013 [2024-11-05 19:14:09.281959] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:24:40.013 [2024-11-05 19:14:09.281989] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1689690 0
00:24:40.013 [2024-11-05 19:14:09.289758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:40.013 [2024-11-05 19:14:09.289771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:40.013 [2024-11-05 19:14:09.289775] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:40.013 [2024-11-05 19:14:09.289779] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:40.013 [2024-11-05 19:14:09.289807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:40.013 [2024-11-05 19:14:09.289812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:40.013 [2024-11-05 19:14:09.289817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690)
00:24:40.013 [2024-11-05 19:14:09.289831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:40.013 [2024-11-05 19:14:09.289849] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.013 [2024-11-05 19:14:09.297757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.297766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.297770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.297775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.013 [2024-11-05 19:14:09.297786] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:40.013 [2024-11-05 19:14:09.297793] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:40.013 [2024-11-05 19:14:09.297798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:40.013 [2024-11-05 19:14:09.297810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.297815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.297818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.013 [2024-11-05 19:14:09.297826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.013 [2024-11-05 19:14:09.297840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.013 [2024-11-05 19:14:09.298022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.298030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.298033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.013 [2024-11-05 19:14:09.298043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:40.013 [2024-11-05 19:14:09.298050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:40.013 [2024-11-05 19:14:09.298057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.013 [2024-11-05 19:14:09.298071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.013 [2024-11-05 19:14:09.298082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.013 [2024-11-05 19:14:09.298227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.298233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.298237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on 
tqpair=0x1689690 00:24:40.013 [2024-11-05 19:14:09.298246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:40.013 [2024-11-05 19:14:09.298254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:40.013 [2024-11-05 19:14:09.298260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.013 [2024-11-05 19:14:09.298277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.013 [2024-11-05 19:14:09.298288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.013 [2024-11-05 19:14:09.298449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.298455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.298459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.013 [2024-11-05 19:14:09.298468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:40.013 [2024-11-05 19:14:09.298477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.013 [2024-11-05 19:14:09.298491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.013 [2024-11-05 19:14:09.298502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.013 [2024-11-05 19:14:09.298711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.013 [2024-11-05 19:14:09.298718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.013 [2024-11-05 19:14:09.298721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.013 [2024-11-05 19:14:09.298730] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:40.013 [2024-11-05 19:14:09.298735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:40.013 [2024-11-05 19:14:09.298742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:40.013 [2024-11-05 19:14:09.298854] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:40.013 [2024-11-05 19:14:09.298860] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:40.013 [2024-11-05 19:14:09.298867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.013 [2024-11-05 19:14:09.298874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.298881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.014 [2024-11-05 19:14:09.298892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.014 [2024-11-05 19:14:09.299053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.299060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.299063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.299072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:40.014 [2024-11-05 19:14:09.299081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.299097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.014 [2024-11-05 19:14:09.299108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.014 [2024-11-05 19:14:09.299284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.299291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.299294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.299303] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:40.014 [2024-11-05 19:14:09.299307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.299315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:40.014 [2024-11-05 19:14:09.299324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.299333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.299343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.014 [2024-11-05 19:14:09.299355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.014 [2024-11-05 19:14:09.299536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.014 [2024-11-05 19:14:09.299543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.014 [2024-11-05 19:14:09.299547] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=4096, cccid=0 00:24:40.014 [2024-11-05 19:14:09.299556] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb100) on tqpair(0x1689690): expected_datao=0, payload_size=4096 00:24:40.014 [2024-11-05 19:14:09.299560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299568] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299571] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.299751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.299755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.299766] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:40.014 [2024-11-05 19:14:09.299771] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:40.014 [2024-11-05 19:14:09.299775] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:40.014 [2024-11-05 19:14:09.299779] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:40.014 [2024-11-05 19:14:09.299786] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:40.014 [2024-11-05 19:14:09.299791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.299801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.299808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.299816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.299823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:40.014 [2024-11-05 19:14:09.299834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x16eb100, cid 0, qid 0 00:24:40.014 [2024-11-05 19:14:09.300029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.300036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.300040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.300053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.014 [2024-11-05 19:14:09.300073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.014 [2024-11-05 19:14:09.300092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.014 [2024-11-05 19:14:09.300111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.014 [2024-11-05 19:14:09.300129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.014 [2024-11-05 19:14:09.300166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb100, cid 0, qid 0 00:24:40.014 [2024-11-05 
19:14:09.300171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb280, cid 1, qid 0 00:24:40.014 [2024-11-05 19:14:09.300178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb400, cid 2, qid 0 00:24:40.014 [2024-11-05 19:14:09.300183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.014 [2024-11-05 19:14:09.300188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.014 [2024-11-05 19:14:09.300354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.300361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.300365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.300375] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:40.014 [2024-11-05 19:14:09.300380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.014 [2024-11-05 19:14:09.300415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:40.014 [2024-11-05 19:14:09.300425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.014 [2024-11-05 19:14:09.300572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.014 [2024-11-05 19:14:09.300579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.014 [2024-11-05 19:14:09.300582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.014 [2024-11-05 19:14:09.300651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:40.014 [2024-11-05 19:14:09.300667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.014 [2024-11-05 19:14:09.300671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.300677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.015 [2024-11-05 19:14:09.300688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.015 [2024-11-05 19:14:09.300880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.015 [2024-11-05 19:14:09.300888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.015 [2024-11-05 19:14:09.300891] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.300895] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=4096, cccid=4 00:24:40.015 [2024-11-05 19:14:09.300900] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb700) on tqpair(0x1689690): expected_datao=0, payload_size=4096 00:24:40.015 [2024-11-05 19:14:09.300904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.300911] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.300917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.301077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.301081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.015 [2024-11-05 19:14:09.301094] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:40.015 [2024-11-05 19:14:09.301108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.301117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.301124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.301134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.015 [2024-11-05 19:14:09.301145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.015 [2024-11-05 19:14:09.301323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.015 [2024-11-05 19:14:09.301330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.015 [2024-11-05 19:14:09.301333] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301337] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=4096, cccid=4 00:24:40.015 [2024-11-05 19:14:09.301341] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb700) on tqpair(0x1689690): expected_datao=0, payload_size=4096 00:24:40.015 [2024-11-05 19:14:09.301346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301352] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.301562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.301566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.015 [2024-11-05 19:14:09.301581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.301590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.301597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.301601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.301608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.015 [2024-11-05 19:14:09.301618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.015 [2024-11-05 19:14:09.305755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.015 [2024-11-05 19:14:09.305763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.015 [2024-11-05 19:14:09.305767] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305770] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=4096, cccid=4 00:24:40.015 [2024-11-05 19:14:09.305777] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb700) on tqpair(0x1689690): expected_datao=0, payload_size=4096 00:24:40.015 [2024-11-05 19:14:09.305782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305792] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.305804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.305807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.015 [2024-11-05 19:14:09.305819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported features (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305857] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:40.015 [2024-11-05 19:14:09.305862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:40.015 [2024-11-05 19:14:09.305867] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:40.015 [2024-11-05 19:14:09.305881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.305891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.015 [2024-11-05 19:14:09.305898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.305905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.305911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.015 [2024-11-05 19:14:09.305925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.015 [2024-11-05 19:14:09.305930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb880, cid 5, qid 0 00:24:40.015 [2024-11-05 19:14:09.306121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.306128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.306131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.306135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.015 [2024-11-05 19:14:09.306142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.306148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.306151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.306157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb880) on tqpair=0x1689690 00:24:40.015 [2024-11-05 19:14:09.306167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.015 [2024-11-05 19:14:09.306171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1689690) 00:24:40.015 [2024-11-05 19:14:09.306177] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.015 [2024-11-05 19:14:09.306187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb880, cid 5, qid 0 00:24:40.015 [2024-11-05 19:14:09.306332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.015 [2024-11-05 19:14:09.306339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.015 [2024-11-05 19:14:09.306342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb880) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.306355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb880, cid 5, qid 0 00:24:40.016 [2024-11-05 19:14:09.306556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.306563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 19:14:09.306566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb880) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.306579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb880, cid 5, qid 0 00:24:40.016 [2024-11-05 19:14:09.306774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.306780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 19:14:09.306784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb880) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.306801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306829] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.306862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1689690) 00:24:40.016 [2024-11-05 19:14:09.306869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.016 [2024-11-05 19:14:09.306880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb880, cid 5, qid 0 00:24:40.016 [2024-11-05 19:14:09.306885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb700, cid 4, qid 0 00:24:40.016 [2024-11-05 19:14:09.306890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eba00, cid 6, qid 0 00:24:40.016 [2024-11-05 19:14:09.306895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ebb80, cid 7, qid 0 00:24:40.016 [2024-11-05 19:14:09.307111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.016 [2024-11-05 19:14:09.307117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.016 [2024-11-05 19:14:09.307121] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307124] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=8192, cccid=5 00:24:40.016 [2024-11-05 19:14:09.307129] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb880) on tqpair(0x1689690): expected_datao=0, payload_size=8192 00:24:40.016 [2024-11-05 19:14:09.307133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307225] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307229] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.016 [2024-11-05 19:14:09.307241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.016 [2024-11-05 19:14:09.307245] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307248] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=512, cccid=4 00:24:40.016 [2024-11-05 19:14:09.307253] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eb700) on tqpair(0x1689690): expected_datao=0, payload_size=512 00:24:40.016 [2024-11-05 19:14:09.307257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307264] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307267] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
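
[Editor's note] The GET LOG PAGE capsules above (cids 4 through 7) encode their transfer size directly in cdw10: bits 07:00 carry the Log Page Identifier (LID), bits 31:16 the 0's-based Number of Dwords Lower (NUMDL), and bit 15 / the log-specific field are zero in all four. A minimal decoding sketch in bash; the helper name is hypothetical and not part of SPDK or these test scripts:

    decode_get_log_page() {
        # cdw10 layout per the NVMe base spec:
        #   [31:16] NUMDL (0's based dword count), [07:00] LID
        local cdw10=$((16#${1#0x}))
        local lid=$((cdw10 & 0xff))
        local numdl=$(((cdw10 >> 16) & 0xffff))
        printf 'cdw10=0x%08x  LID=0x%02x  transfer=%d bytes\n' \
            "$cdw10" "$lid" $(((numdl + 1) * 4))
    }

    decode_get_log_page 07ff0001   # cid 5: LID 0x01 Error Information,  8192 bytes
    decode_get_log_page 007f0002   # cid 4: LID 0x02 SMART / Health,      512 bytes
    decode_get_log_page 007f0003   # cid 6: LID 0x03 Firmware Slot Info,  512 bytes
    decode_get_log_page 03ff0005   # cid 7: LID 0x05 Commands & Effects, 4096 bytes

Those sizes line up with the c2h_data PDUs interleaved here (datal=8192 for cccid=5, 512 for cccids 4 and 6, 4096 for cccid=7), and the 8192-byte error log matches the controller's 128 supported error log entries at 64 bytes each.
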
00:24:40.016 [2024-11-05 19:14:09.307273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.016 [2024-11-05 19:14:09.307279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.016 [2024-11-05 19:14:09.307282] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307285] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=512, cccid=6 00:24:40.016 [2024-11-05 19:14:09.307290] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16eba00) on tqpair(0x1689690): expected_datao=0, payload_size=512 00:24:40.016 [2024-11-05 19:14:09.307294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307300] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307304] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.016 [2024-11-05 19:14:09.307315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.016 [2024-11-05 19:14:09.307319] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307322] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1689690): datao=0, datal=4096, cccid=7 00:24:40.016 [2024-11-05 19:14:09.307327] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16ebb80) on tqpair(0x1689690): expected_datao=0, payload_size=4096 00:24:40.016 [2024-11-05 19:14:09.307333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307340] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307343] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.307359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 19:14:09.307363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb880) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.307378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.307384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 19:14:09.307388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb700) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.307401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.307407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 19:14:09.307411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eba00) on tqpair=0x1689690 00:24:40.016 [2024-11-05 19:14:09.307422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.016 [2024-11-05 19:14:09.307428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.016 [2024-11-05 
19:14:09.307431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.016 [2024-11-05 19:14:09.307435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ebb80) on tqpair=0x1689690 00:24:40.016 ===================================================== 00:24:40.016 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.016 ===================================================== 00:24:40.016 Controller Capabilities/Features 00:24:40.016 ================================ 00:24:40.016 Vendor ID: 8086 00:24:40.016 Subsystem Vendor ID: 8086 00:24:40.016 Serial Number: SPDK00000000000001 00:24:40.016 Model Number: SPDK bdev Controller 00:24:40.016 Firmware Version: 25.01 00:24:40.016 Recommended Arb Burst: 6 00:24:40.016 IEEE OUI Identifier: e4 d2 5c 00:24:40.016 Multi-path I/O 00:24:40.016 May have multiple subsystem ports: Yes 00:24:40.016 May have multiple controllers: Yes 00:24:40.016 Associated with SR-IOV VF: No 00:24:40.016 Max Data Transfer Size: 131072 00:24:40.016 Max Number of Namespaces: 32 00:24:40.016 Max Number of I/O Queues: 127 00:24:40.016 NVMe Specification Version (VS): 1.3 00:24:40.016 NVMe Specification Version (Identify): 1.3 00:24:40.016 Maximum Queue Entries: 128 00:24:40.016 Contiguous Queues Required: Yes 00:24:40.016 Arbitration Mechanisms Supported 00:24:40.016 Weighted Round Robin: Not Supported 00:24:40.016 Vendor Specific: Not Supported 00:24:40.016 Reset Timeout: 15000 ms 00:24:40.016 Doorbell Stride: 4 bytes 00:24:40.016 NVM Subsystem Reset: Not Supported 00:24:40.016 Command Sets Supported 00:24:40.016 NVM Command Set: Supported 00:24:40.016 Boot Partition: Not Supported 00:24:40.016 Memory Page Size Minimum: 4096 bytes 00:24:40.016 Memory Page Size Maximum: 4096 bytes 00:24:40.016 Persistent Memory Region: Not Supported 00:24:40.016 Optional Asynchronous Events Supported 00:24:40.016 Namespace Attribute Notices: Supported 00:24:40.016 Firmware Activation Notices: Not Supported 00:24:40.016 ANA Change Notices: Not Supported 00:24:40.016 PLE Aggregate Log Change Notices: Not Supported 00:24:40.016 LBA Status Info Alert Notices: Not Supported 00:24:40.016 EGE Aggregate Log Change Notices: Not Supported 00:24:40.016 Normal NVM Subsystem Shutdown event: Not Supported 00:24:40.016 Zone Descriptor Change Notices: Not Supported 00:24:40.016 Discovery Log Change Notices: Not Supported 00:24:40.016 Controller Attributes 00:24:40.017 128-bit Host Identifier: Supported 00:24:40.017 Non-Operational Permissive Mode: Not Supported 00:24:40.017 NVM Sets: Not Supported 00:24:40.017 Read Recovery Levels: Not Supported 00:24:40.017 Endurance Groups: Not Supported 00:24:40.017 Predictable Latency Mode: Not Supported 00:24:40.017 Traffic Based Keep ALive: Not Supported 00:24:40.017 Namespace Granularity: Not Supported 00:24:40.017 SQ Associations: Not Supported 00:24:40.017 UUID List: Not Supported 00:24:40.017 Multi-Domain Subsystem: Not Supported 00:24:40.017 Fixed Capacity Management: Not Supported 00:24:40.017 Variable Capacity Management: Not Supported 00:24:40.017 Delete Endurance Group: Not Supported 00:24:40.017 Delete NVM Set: Not Supported 00:24:40.017 Extended LBA Formats Supported: Not Supported 00:24:40.017 Flexible Data Placement Supported: Not Supported 00:24:40.017 00:24:40.017 Controller Memory Buffer Support 00:24:40.017 ================================ 00:24:40.017 Supported: No 00:24:40.017 00:24:40.017 Persistent Memory Region Support 00:24:40.017 ================================ 00:24:40.017 
Supported: No 00:24:40.017 00:24:40.017 Admin Command Set Attributes 00:24:40.017 ============================ 00:24:40.017 Security Send/Receive: Not Supported 00:24:40.017 Format NVM: Not Supported 00:24:40.017 Firmware Activate/Download: Not Supported 00:24:40.017 Namespace Management: Not Supported 00:24:40.017 Device Self-Test: Not Supported 00:24:40.017 Directives: Not Supported 00:24:40.017 NVMe-MI: Not Supported 00:24:40.017 Virtualization Management: Not Supported 00:24:40.017 Doorbell Buffer Config: Not Supported 00:24:40.017 Get LBA Status Capability: Not Supported 00:24:40.017 Command & Feature Lockdown Capability: Not Supported 00:24:40.017 Abort Command Limit: 4 00:24:40.017 Async Event Request Limit: 4 00:24:40.017 Number of Firmware Slots: N/A 00:24:40.017 Firmware Slot 1 Read-Only: N/A 00:24:40.017 Firmware Activation Without Reset: N/A 00:24:40.017 Multiple Update Detection Support: N/A 00:24:40.017 Firmware Update Granularity: No Information Provided 00:24:40.017 Per-Namespace SMART Log: No 00:24:40.017 Asymmetric Namespace Access Log Page: Not Supported 00:24:40.017 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:40.017 Command Effects Log Page: Supported 00:24:40.017 Get Log Page Extended Data: Supported 00:24:40.017 Telemetry Log Pages: Not Supported 00:24:40.017 Persistent Event Log Pages: Not Supported 00:24:40.017 Supported Log Pages Log Page: May Support 00:24:40.017 Commands Supported & Effects Log Page: Not Supported 00:24:40.017 Feature Identifiers & Effects Log Page:May Support 00:24:40.017 NVMe-MI Commands & Effects Log Page: May Support 00:24:40.017 Data Area 4 for Telemetry Log: Not Supported 00:24:40.017 Error Log Page Entries Supported: 128 00:24:40.017 Keep Alive: Supported 00:24:40.017 Keep Alive Granularity: 10000 ms 00:24:40.017 00:24:40.017 NVM Command Set Attributes 00:24:40.017 ========================== 00:24:40.017 Submission Queue Entry Size 00:24:40.017 Max: 64 00:24:40.017 Min: 64 00:24:40.017 Completion Queue Entry Size 00:24:40.017 Max: 16 00:24:40.017 Min: 16 00:24:40.017 Number of Namespaces: 32 00:24:40.017 Compare Command: Supported 00:24:40.017 Write Uncorrectable Command: Not Supported 00:24:40.017 Dataset Management Command: Supported 00:24:40.017 Write Zeroes Command: Supported 00:24:40.017 Set Features Save Field: Not Supported 00:24:40.017 Reservations: Supported 00:24:40.017 Timestamp: Not Supported 00:24:40.017 Copy: Supported 00:24:40.017 Volatile Write Cache: Present 00:24:40.017 Atomic Write Unit (Normal): 1 00:24:40.017 Atomic Write Unit (PFail): 1 00:24:40.017 Atomic Compare & Write Unit: 1 00:24:40.017 Fused Compare & Write: Supported 00:24:40.017 Scatter-Gather List 00:24:40.017 SGL Command Set: Supported 00:24:40.017 SGL Keyed: Supported 00:24:40.017 SGL Bit Bucket Descriptor: Not Supported 00:24:40.017 SGL Metadata Pointer: Not Supported 00:24:40.017 Oversized SGL: Not Supported 00:24:40.017 SGL Metadata Address: Not Supported 00:24:40.017 SGL Offset: Supported 00:24:40.017 Transport SGL Data Block: Not Supported 00:24:40.017 Replay Protected Memory Block: Not Supported 00:24:40.017 00:24:40.017 Firmware Slot Information 00:24:40.017 ========================= 00:24:40.017 Active slot: 1 00:24:40.017 Slot 1 Firmware Revision: 25.01 00:24:40.017 00:24:40.017 00:24:40.017 Commands Supported and Effects 00:24:40.017 ============================== 00:24:40.017 Admin Commands 00:24:40.017 -------------- 00:24:40.017 Get Log Page (02h): Supported 00:24:40.017 Identify (06h): Supported 00:24:40.017 Abort (08h): Supported 
00:24:40.017 Set Features (09h): Supported 00:24:40.017 Get Features (0Ah): Supported 00:24:40.017 Asynchronous Event Request (0Ch): Supported 00:24:40.017 Keep Alive (18h): Supported 00:24:40.017 I/O Commands 00:24:40.017 ------------ 00:24:40.017 Flush (00h): Supported LBA-Change 00:24:40.017 Write (01h): Supported LBA-Change 00:24:40.017 Read (02h): Supported 00:24:40.017 Compare (05h): Supported 00:24:40.017 Write Zeroes (08h): Supported LBA-Change 00:24:40.017 Dataset Management (09h): Supported LBA-Change 00:24:40.017 Copy (19h): Supported LBA-Change 00:24:40.017 00:24:40.017 Error Log 00:24:40.017 ========= 00:24:40.017 00:24:40.017 Arbitration 00:24:40.017 =========== 00:24:40.017 Arbitration Burst: 1 00:24:40.017 00:24:40.017 Power Management 00:24:40.017 ================ 00:24:40.017 Number of Power States: 1 00:24:40.017 Current Power State: Power State #0 00:24:40.017 Power State #0: 00:24:40.017 Max Power: 0.00 W 00:24:40.017 Non-Operational State: Operational 00:24:40.017 Entry Latency: Not Reported 00:24:40.017 Exit Latency: Not Reported 00:24:40.017 Relative Read Throughput: 0 00:24:40.017 Relative Read Latency: 0 00:24:40.017 Relative Write Throughput: 0 00:24:40.017 Relative Write Latency: 0 00:24:40.017 Idle Power: Not Reported 00:24:40.017 Active Power: Not Reported 00:24:40.017 Non-Operational Permissive Mode: Not Supported 00:24:40.017 00:24:40.017 Health Information 00:24:40.017 ================== 00:24:40.017 Critical Warnings: 00:24:40.017 Available Spare Space: OK 00:24:40.017 Temperature: OK 00:24:40.017 Device Reliability: OK 00:24:40.017 Read Only: No 00:24:40.017 Volatile Memory Backup: OK 00:24:40.017 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:40.017 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:40.017 Available Spare: 0% 00:24:40.017 Available Spare Threshold: 0% 00:24:40.017 Life Percentage Used:[2024-11-05 19:14:09.307531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.017 [2024-11-05 19:14:09.307536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1689690) 00:24:40.017 [2024-11-05 19:14:09.307543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-11-05 19:14:09.307554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16ebb80, cid 7, qid 0 00:24:40.017 [2024-11-05 19:14:09.307704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.017 [2024-11-05 19:14:09.307710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.017 [2024-11-05 19:14:09.307714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.017 [2024-11-05 19:14:09.307718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16ebb80) on tqpair=0x1689690 00:24:40.017 [2024-11-05 19:14:09.307753] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:40.017 [2024-11-05 19:14:09.307763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb100) on tqpair=0x1689690 00:24:40.017 [2024-11-05 19:14:09.307769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.017 [2024-11-05 19:14:09.307774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb280) on tqpair=0x1689690 00:24:40.017 [2024-11-05 19:14:09.307779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.017 [2024-11-05 19:14:09.307784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb400) on tqpair=0x1689690 00:24:40.017 [2024-11-05 19:14:09.307789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.017 [2024-11-05 19:14:09.307794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.017 [2024-11-05 19:14:09.307799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.017 [2024-11-05 19:14:09.307808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.017 [2024-11-05 19:14:09.307812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.017 [2024-11-05 19:14:09.307816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.017 [2024-11-05 19:14:09.307823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.017 [2024-11-05 19:14:09.307835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.017 [2024-11-05 19:14:09.308018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.017 [2024-11-05 19:14:09.308025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.017 [2024-11-05 19:14:09.308029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.017 [2024-11-05 19:14:09.308032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.308039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.308053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.308066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.308260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.308267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.308270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.308279] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:40.018 [2024-11-05 19:14:09.308284] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:40.018 [2024-11-05 19:14:09.308293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308300] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.308307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.308317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.308473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.308479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.308483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.308496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.308511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.308520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.308695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.308702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.308707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.308720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.308735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.308748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.308972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.308978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.308982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.308995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.308999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.309009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.309019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.309242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.309248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.309252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.309265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.309280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.309289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.309514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.309520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.309524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.309537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.309545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.309552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.309561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 19:14:09.313757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.313766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.313769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.313775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.313785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.313789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.313793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1689690) 00:24:40.018 [2024-11-05 19:14:09.313800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.018 [2024-11-05 19:14:09.313811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16eb580, cid 3, qid 0 00:24:40.018 [2024-11-05 
19:14:09.313972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.018 [2024-11-05 19:14:09.313979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.018 [2024-11-05 19:14:09.313982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.018 [2024-11-05 19:14:09.313986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16eb580) on tqpair=0x1689690 00:24:40.018 [2024-11-05 19:14:09.313994] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:40.018 0% 00:24:40.018 Data Units Read: 0 00:24:40.018 Data Units Written: 0 00:24:40.018 Host Read Commands: 0 00:24:40.018 Host Write Commands: 0 00:24:40.018 Controller Busy Time: 0 minutes 00:24:40.018 Power Cycles: 0 00:24:40.018 Power On Hours: 0 hours 00:24:40.018 Unsafe Shutdowns: 0 00:24:40.018 Unrecoverable Media Errors: 0 00:24:40.018 Lifetime Error Log Entries: 0 00:24:40.018 Warning Temperature Time: 0 minutes 00:24:40.018 Critical Temperature Time: 0 minutes 00:24:40.018 00:24:40.018 Number of Queues 00:24:40.018 ================ 00:24:40.018 Number of I/O Submission Queues: 127 00:24:40.018 Number of I/O Completion Queues: 127 00:24:40.018 00:24:40.018 Active Namespaces 00:24:40.018 ================= 00:24:40.018 Namespace ID:1 00:24:40.018 Error Recovery Timeout: Unlimited 00:24:40.018 Command Set Identifier: NVM (00h) 00:24:40.018 Deallocate: Supported 00:24:40.018 Deallocated/Unwritten Error: Not Supported 00:24:40.018 Deallocated Read Value: Unknown 00:24:40.018 Deallocate in Write Zeroes: Not Supported 00:24:40.018 Deallocated Guard Field: 0xFFFF 00:24:40.018 Flush: Supported 00:24:40.018 Reservation: Supported 00:24:40.018 Namespace Sharing Capabilities: Multiple Controllers 00:24:40.018 Size (in LBAs): 131072 (0GiB) 00:24:40.018 Capacity (in LBAs): 131072 (0GiB) 00:24:40.018 Utilization (in LBAs): 131072 (0GiB) 00:24:40.018 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:40.018 EUI64: ABCDEF0123456789 00:24:40.018 UUID: 6a17da61-76bd-4cef-9352-c489d989ac2a 00:24:40.018 Thin Provisioning: Not Supported 00:24:40.018 Per-NS Atomic Units: Yes 00:24:40.018 Atomic Boundary Size (Normal): 0 00:24:40.018 Atomic Boundary Size (PFail): 0 00:24:40.018 Atomic Boundary Offset: 0 00:24:40.018 Maximum Single Source Range Length: 65535 00:24:40.018 Maximum Copy Length: 65535 00:24:40.018 Maximum Source Range Count: 1 00:24:40.018 NGUID/EUI64 Never Reused: No 00:24:40.018 Namespace Write Protected: No 00:24:40.018 Number of LBA Formats: 1 00:24:40.018 Current LBA Format: LBA Format #00 00:24:40.018 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:40.018 00:24:40.018 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 
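
[Editor's note] With the identify pass done, the script tears the target back down: sync, delete the subsystem over RPC, then nvmftestfini, whose nvmfcleanup xtrace fills the next stretch of the log. A condensed sketch of that sequence, assuming the repo layout this job checked out; the {1..20} modprobe retry loop mirrors the trace below, the rest is illustrative rather than the verbatim library code:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    sync                                                       # flush page cache before teardown
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem under test

    set +e                                  # module removal can need several attempts
    for i in {1..20}; do
        # -r also drops the now-unused nvme_fabrics/nvme_keyring dependencies,
        # which is why three rmmod lines show up in the trace below
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e

After the modules are gone, killprocess kills the saved nvmf target PID (438554 here) and waits on it, and nvmf_fini flushes the addresses off the test interfaces.
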
00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:40.280 rmmod nvme_tcp 00:24:40.280 rmmod nvme_fabrics 00:24:40.280 rmmod nvme_keyring 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 438554 ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 438554 ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 438554' 00:24:40.280 killing process with pid 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 438554 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:24:40.280 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:40.542 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:40.542 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:40.542 19:14:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:42.458 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:42.459 19:14:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-save 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-restore 00:24:42.459 00:24:42.459 real 0m11.466s 00:24:42.459 user 0m8.353s 00:24:42.459 sys 0m5.974s 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 ************************************ 00:24:42.459 END TEST nvmf_identify 00:24:42.459 ************************************ 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.459 ************************************ 00:24:42.459 START TEST nvmf_perf 00:24:42.459 ************************************ 00:24:42.459 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:42.721 * Looking for test storage... 
00:24:42.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.721 --rc genhtml_branch_coverage=1 00:24:42.721 --rc genhtml_function_coverage=1 00:24:42.721 --rc genhtml_legend=1 00:24:42.721 --rc geninfo_all_blocks=1 00:24:42.721 --rc geninfo_unexecuted_blocks=1 00:24:42.721 00:24:42.721 ' 00:24:42.721 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:42.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.721 --rc genhtml_branch_coverage=1 00:24:42.721 --rc genhtml_function_coverage=1 00:24:42.721 --rc genhtml_legend=1 00:24:42.722 --rc geninfo_all_blocks=1 00:24:42.722 --rc geninfo_unexecuted_blocks=1 00:24:42.722 00:24:42.722 ' 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.722 --rc genhtml_branch_coverage=1 00:24:42.722 --rc genhtml_function_coverage=1 00:24:42.722 --rc genhtml_legend=1 00:24:42.722 --rc geninfo_all_blocks=1 00:24:42.722 --rc geninfo_unexecuted_blocks=1 00:24:42.722 00:24:42.722 ' 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.722 --rc genhtml_branch_coverage=1 00:24:42.722 --rc genhtml_function_coverage=1 00:24:42.722 --rc genhtml_legend=1 00:24:42.722 --rc geninfo_all_blocks=1 00:24:42.722 --rc geninfo_unexecuted_blocks=1 00:24:42.722 00:24:42.722 ' 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.722 19:14:11 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:42.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:24:42.722 19:14:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.865 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:50.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:50.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:50.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:50.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # create_target_ns 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- 
# local -gA dev_map 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:50.866 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:50.867 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:50.867 19:14:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:24:50.867 10.0.0.1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:50.867 10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:50.867 19:14:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:50.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.693 ms 00:24:50.867 00:24:50.867 --- 10.0.0.1 ping statistics --- 00:24:50.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.867 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:50.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:24:50.867 00:24:50.867 --- 10.0.0.2 ping statistics --- 00:24:50.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.867 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:50.867 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n 
initiator1 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:24:50.868 19:14:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=442932 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 442932 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 442932 ']' 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:50.868 19:14:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.868 [2024-11-05 19:14:19.445897] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:50.868 [2024-11-05 19:14:19.445958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.868 [2024-11-05 19:14:19.525737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.868 [2024-11-05 19:14:19.562367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.868 [2024-11-05 19:14:19.562400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.868 [2024-11-05 19:14:19.562408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.868 [2024-11-05 19:14:19.562415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:50.868 [2024-11-05 19:14:19.562421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.868 [2024-11-05 19:14:19.563907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.868 [2024-11-05 19:14:19.567764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.868 [2024-11-05 19:14:19.567866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.868 [2024-11-05 19:14:19.567869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:51.129 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:51.699 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:51.699 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:51.699 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:51.699 19:14:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:51.960 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:51.960 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:51.960 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:51.960 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:51.960 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:52.221 [2024-11-05 19:14:21.328781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.221 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.482 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:52.482 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.482 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:52.482 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 
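For readers tracing the perf.sh flow, the target provisioning captured in the xtrace entries above and the listener entries just below condenses to a short rpc.py sequence. A minimal sketch, assuming the nvmf_tgt app from this run is up and abbreviating the harness's full scripts/rpc.py path as $rpc (the abbreviation is ours; every call and flag is taken verbatim from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with the options the suite computed (NVMF_TRANSPORT_OPTS='-t tcp -o')
    $rpc nvmf_create_transport -t tcp -o
    # Subsystem that accepts any host (-a) with a fixed serial number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # One namespace per bdev collected in $bdevs (' Malloc0 Nvme0n1')
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # Data and discovery listeners on the in-namespace target address
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420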
00:24:52.742 19:14:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:52.742 [2024-11-05 19:14:22.063496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:53.002 19:14:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:53.002 19:14:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:24:53.002 19:14:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:53.002 19:14:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:53.002 19:14:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:24:54.387 Initializing NVMe Controllers
00:24:54.387 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:24:54.387 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:24:54.387 Initialization complete. Launching workers.
00:24:54.387 ========================================================
00:24:54.387 Latency(us)
00:24:54.387 Device Information : IOPS MiB/s Average min max
00:24:54.387 PCIE (0000:65:00.0) NSID 1 from core 0: 78874.55 308.10 405.08 13.38 8228.93
00:24:54.387 ========================================================
00:24:54.387 Total : 78874.55 308.10 405.08 13.38 8228.93
00:24:54.387
00:24:54.387 19:14:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:55.770 Initializing NVMe Controllers
00:24:55.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:55.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:55.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:55.770 Initialization complete. Launching workers.
00:24:55.770 ========================================================
00:24:55.770 Latency(us)
00:24:55.770 Device Information : IOPS MiB/s Average min max
00:24:55.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.00 0.22 17820.20 205.56 45742.56
00:24:55.770 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19698.34 7364.10 47889.55
00:24:55.770 ========================================================
00:24:55.770 Total : 108.00 0.42 18707.10 205.56 47889.55
00:24:55.770
00:24:55.770 19:14:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:57.152 Initializing NVMe Controllers
00:24:57.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:57.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:57.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:57.152 Initialization complete. Launching workers.
00:24:57.152 ========================================================
00:24:57.152 Latency(us)
00:24:57.152 Device Information : IOPS MiB/s Average min max
00:24:57.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10366.99 40.50 3086.87 524.79 6545.44
00:24:57.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3814.00 14.90 8539.32 6868.55 47548.43
00:24:57.152 ========================================================
00:24:57.153 Total : 14180.99 55.39 4553.32 524.79 47548.43
00:24:57.153
00:24:57.153 19:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:57.153 19:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:57.153 19:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:59.698 Initializing NVMe Controllers
00:24:59.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:59.698 Controller IO queue size 128, less than required.
00:24:59.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:59.698 Controller IO queue size 128, less than required.
00:24:59.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:59.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:59.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:59.698 Initialization complete. Launching workers.
00:24:59.698 ========================================================
00:24:59.698 Latency(us)
00:24:59.698 Device Information : IOPS MiB/s Average min max
00:24:59.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1688.96 422.24 76553.99 47103.79 117973.09
00:24:59.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.99 148.50 225736.69 60701.44 346764.55
00:24:59.698 ========================================================
00:24:59.698 Total : 2282.95 570.74 115368.94 47103.79 346764.55
00:24:59.698
00:24:59.698 19:14:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:59.698 No valid NVMe controllers or AIO or URING devices found
00:24:59.699 Initializing NVMe Controllers
00:24:59.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:59.699 Controller IO queue size 128, less than required.
00:24:59.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:59.699 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:59.699 Controller IO queue size 128, less than required.
00:24:59.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:59.699 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:59.699 WARNING: Some requested NVMe devices were skipped
00:24:59.959 19:14:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:02.584 Initializing NVMe Controllers
00:25:02.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:02.584 Controller IO queue size 128, less than required.
00:25:02.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:02.584 Controller IO queue size 128, less than required.
00:25:02.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:02.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:02.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:02.584 Initialization complete. Launching workers.
00:25:02.584
00:25:02.584 ====================
00:25:02.584 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:02.584 TCP transport:
00:25:02.584 polls: 18429
00:25:02.584 idle_polls: 9649
00:25:02.584 sock_completions: 8780
00:25:02.584 nvme_completions: 8367
00:25:02.584 submitted_requests: 12576
00:25:02.584 queued_requests: 1
00:25:02.584
00:25:02.584 ====================
00:25:02.584 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:02.584 TCP transport:
00:25:02.584 polls: 19253
00:25:02.584 idle_polls: 10548
00:25:02.584 sock_completions: 8705
00:25:02.584 nvme_completions: 6411
00:25:02.584 submitted_requests: 9620
00:25:02.584 queued_requests: 1
00:25:02.584 ========================================================
00:25:02.584 Latency(us)
00:25:02.584 Device Information : IOPS MiB/s Average min max
00:25:02.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2090.63 522.66 61962.74 39901.58 115732.24
00:25:02.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1601.83 400.46 81862.55 43661.98 140909.89
00:25:02.584 ========================================================
00:25:02.584 Total : 3692.45 923.11 70595.51 39901.58 140909.89
00:25:02.584
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20}
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:25:02.584 rmmod nvme_tcp
00:25:02.584 rmmod nvme_fabrics
00:25:02.584 rmmod nvme_keyring
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 442932 ']'
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 442932
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 442932 ']'
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 442932
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 442932
00:25:02.584 19:14:31
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 442932' 00:25:02.584 killing process with pid 442932 00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 442932 00:25:02.584 19:14:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 442932 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:05.131 19:14:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@284 -- # iptr 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-save 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-restore 00:25:07.048 00:25:07.048 real 0m24.139s 00:25:07.048 user 0m58.614s 00:25:07.048 sys 0m8.278s 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:07.048 ************************************ 00:25:07.048 END TEST nvmf_perf 00:25:07.048 ************************************ 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.048 ************************************ 00:25:07.048 START TEST nvmf_fio_host 00:25:07.048 ************************************ 00:25:07.048 19:14:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.048 * Looking for test storage... 00:25:07.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.048 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:07.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.049 --rc genhtml_branch_coverage=1 00:25:07.049 --rc genhtml_function_coverage=1 00:25:07.049 --rc genhtml_legend=1 00:25:07.049 --rc geninfo_all_blocks=1 00:25:07.049 --rc geninfo_unexecuted_blocks=1 00:25:07.049 00:25:07.049 ' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:07.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.049 --rc genhtml_branch_coverage=1 00:25:07.049 --rc genhtml_function_coverage=1 00:25:07.049 --rc genhtml_legend=1 00:25:07.049 --rc geninfo_all_blocks=1 00:25:07.049 --rc geninfo_unexecuted_blocks=1 00:25:07.049 00:25:07.049 ' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:07.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.049 --rc genhtml_branch_coverage=1 00:25:07.049 --rc genhtml_function_coverage=1 00:25:07.049 --rc genhtml_legend=1 00:25:07.049 --rc geninfo_all_blocks=1 00:25:07.049 --rc geninfo_unexecuted_blocks=1 00:25:07.049 00:25:07.049 ' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:07.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.049 --rc genhtml_branch_coverage=1 00:25:07.049 --rc genhtml_function_coverage=1 00:25:07.049 --rc genhtml_legend=1 00:25:07.049 --rc geninfo_all_blocks=1 00:25:07.049 --rc geninfo_unexecuted_blocks=1 00:25:07.049 00:25:07.049 ' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.049 19:14:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:07.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:07.049 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap 
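Annotation: the "[: : integer expression expected" line captured above is a real artifact of the run, not of this transcript: build_nvmf_app_args hands an unset variable to test(1)'s -eq, so the operand is empty. The variable's name is not visible in the trace; with a hypothetical $flag, the defensive spellings are:

    flag=""
    # '[' '' -eq 1 ']' prints "integer expression expected"; default the operand:
    [ "${flag:-0}" -eq 1 ] && echo "feature on"
    # or use a bash arithmetic context, which treats empty/unset as 0:
    (( ${flag:-0} == 1 )) && echo "feature on"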
nvmftestfini SIGINT SIGTERM EXIT 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:25:07.050 19:14:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:15.202 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:15.202 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:15.202 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.203 
19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:15.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:15.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # create_target_ns 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:15.203 
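Annotation: the device scan completed above walks a cache of supported PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox IDs) and reports the kernel net devices bound to each matching function; here it finds two E810 ports carrying cvl_0_0 and cvl_0_1. A hedged sketch of the same sysfs walk, not the gather_supported_nvmf_pci_devs source:

    # Find Intel E810 functions (vendor 0x8086, device 0x159b) and their netdevs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done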
19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:15.203 19:14:43 
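Annotation: setup_interfaces hands out addresses from an integer pool starting at 0x0a000001, and val_to_ip renders each value as a dotted quad (167772161 becomes 10.0.0.1). The trace only shows the final printf, so the bit-shift decomposition below is an assumption:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val        & 255 ))
    }
    val_to_ip 167772161            # 10.0.0.1
    val_to_ip $(( 167772161 + 1 )) # 10.0.0.2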
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:15.203 10.0.0.1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:15.203 10.0.0.2 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:15.203 
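Annotation: condensed, the interface-pair bring-up traced above amounts to the commands below, all of which appear verbatim in the log. The initiator port stays in the host namespace, the target port moves into nvmf_ns_spdk, and the ifalias writes are what later helpers read the addresses back from:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk            # move target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_0               # initiator side, host namespace
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up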
19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:15.203 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
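Annotation: the ipts call above opens TCP port 4420 and tags the rule with an SPDK_NVMF comment; the matching cleanup is the iptr call at the top of this section, which rewrites the ruleset without the tagged lines. Sketched from the wrapper's expansion in the trace:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
    # teardown: every tagged rule is one line in iptables-save, so dropping
    # them wholesale is a single filtered round-trip
    iptables-save | grep -v SPDK_NVMF | iptables-restore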
nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:15.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.629 ms 00:25:15.204 00:25:15.204 --- 10.0.0.1 ping statistics --- 00:25:15.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.204 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:15.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:15.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:25:15.204 00:25:15.204 --- 10.0.0.2 ping statistics --- 00:25:15.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.204 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
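Annotation: nvmf_legacy_env repopulates the older NVMF_*_IP variables by reading each device's ifalias back out of sysfs; initiator1/target1 have no dev_map entry on this single-pair rig, so the second-IP variables stay empty. A minimal sketch of the readback, with the dev_map indirection elided:

    get_ip_address() { cat "/sys/class/net/$1/ifalias"; }
    NVMF_FIRST_INITIATOR_IP=$(get_ip_address cvl_0_0)   # 10.0.0.1
    NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk \
        cat /sys/class/net/cvl_0_1/ifalias)             # 10.0.0.2
    NVMF_SECOND_INITIATOR_IP= NVMF_SECOND_TARGET_IP=    # no second pair found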
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:15.204 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=450014 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 450014 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 450014 ']' 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.205 19:14:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.205 [2024-11-05 19:14:43.829961] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:25:15.205 [2024-11-05 19:14:43.830013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.205 [2024-11-05 19:14:43.908741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.205 [2024-11-05 19:14:43.944645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:15.205 [2024-11-05 19:14:43.944677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.205 [2024-11-05 19:14:43.944685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.205 [2024-11-05 19:14:43.944692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.205 [2024-11-05 19:14:43.944698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.205 [2024-11-05 19:14:43.946378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.205 [2024-11-05 19:14:43.946492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.205 [2024-11-05 19:14:43.946649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.205 [2024-11-05 19:14:43.946649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.465 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.465 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:15.465 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:15.465 [2024-11-05 19:14:44.773067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.727 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:15.727 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.727 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.727 19:14:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:15.727 Malloc1 00:25:15.727 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.988 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:16.249 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.249 [2024-11-05 19:14:45.528724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.249 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
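Annotation: put together, the target bring-up traced above is the standard SPDK NVMe-oF recipe: start nvmf_tgt inside the namespace, then configure it over rpc.py. Every command below appears verbatim in the log (rpc.py and nvmf_tgt shortened from their full Jenkins workspace paths; transport flags reproduced as captured):

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420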
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:16.511 19:14:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.098 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:17.098 fio-3.35 00:25:17.098 Starting 1 thread 00:25:19.644 00:25:19.644 test: (groupid=0, jobs=1): err= 0: pid=450811: Tue Nov 5 19:14:48 2024 00:25:19.644 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec) 00:25:19.644 slat (usec): min=2, max=306, avg= 2.16, stdev= 2.60 00:25:19.644 clat (usec): min=3604, max=9491, avg=5122.63, stdev=390.47 00:25:19.644 lat (usec): min=3607, max=9504, avg=5124.79, stdev=390.73 00:25:19.644 clat percentiles (usec): 00:25:19.644 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:19.644 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 
5145], 60.00th=[ 5211], 00:25:19.644 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:25:19.644 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 8586], 99.95th=[ 8979], 00:25:19.644 | 99.99th=[ 9503] 00:25:19.644 bw ( KiB/s): min=54032, max=55584, per=100.00%, avg=55094.00, stdev=715.11, samples=4 00:25:19.644 iops : min=13508, max=13896, avg=13773.50, stdev=178.78, samples=4 00:25:19.644 write: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2005msec); 0 zone resets 00:25:19.644 slat (usec): min=2, max=274, avg= 2.22, stdev= 1.82 00:25:19.644 clat (usec): min=2735, max=8038, avg=4141.59, stdev=341.20 00:25:19.644 lat (usec): min=2737, max=8044, avg=4143.81, stdev=341.50 00:25:19.644 clat percentiles (usec): 00:25:19.644 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:25:19.644 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:25:19.644 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:19.644 | 99.00th=[ 4883], 99.50th=[ 5735], 99.90th=[ 7439], 99.95th=[ 7635], 00:25:19.644 | 99.99th=[ 7963] 00:25:19.644 bw ( KiB/s): min=54392, max=55424, per=100.00%, avg=55042.00, stdev=451.15, samples=4 00:25:19.644 iops : min=13598, max=13856, avg=13760.50, stdev=112.79, samples=4 00:25:19.644 lat (msec) : 4=15.58%, 10=84.42% 00:25:19.644 cpu : usr=76.55%, sys=22.11%, ctx=26, majf=0, minf=16 00:25:19.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:19.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:19.645 issued rwts: total=27615,27580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:19.645 00:25:19.645 Run status group 0 (all jobs): 00:25:19.645 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:19.645 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2005-2005msec 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 
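Annotation: both fio jobs in this test drive I/O through SPDK's external ioengine rather than a kernel block device: fio_plugin LD_PRELOADs build/fio/spdk_nvme and passes the NVMe-oF connection as a key/value "filename" string. A hedged sketch of the invocation pattern, with workspace paths shortened:

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096
    # The job file selects ioengine=spdk; the filename string stands in for
    # /dev/nvme* and names the transport, address, service ID, and namespace.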
00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:19.645 19:14:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:19.645 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:19.645 fio-3.35 00:25:19.645 Starting 1 thread 00:25:22.187 00:25:22.187 test: (groupid=0, jobs=1): err= 0: pid=451388: Tue Nov 5 19:14:51 2024 00:25:22.187 read: IOPS=9212, BW=144MiB/s (151MB/s)(289MiB/2008msec) 00:25:22.187 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.73 00:25:22.187 clat (usec): min=1842, max=17055, avg=8504.56, stdev=2123.95 00:25:22.187 lat (usec): min=1846, max=17058, avg=8508.18, stdev=2124.10 00:25:22.187 clat percentiles (usec): 00:25:22.187 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6652], 00:25:22.187 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 8979], 00:25:22.187 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11076], 95.00th=[11863], 00:25:22.187 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15270], 99.95th=[15533], 00:25:22.187 | 99.99th=[16909] 00:25:22.187 bw ( KiB/s): min=64480, max=83808, per=49.12%, avg=72408.00, stdev=8154.09, samples=4 00:25:22.187 iops : min= 4030, max= 5238, avg=4525.50, stdev=509.63, samples=4 00:25:22.187 write: IOPS=5553, BW=86.8MiB/s (91.0MB/s)(148MiB/1711msec); 0 zone resets 00:25:22.187 slat (usec): min=39, max=453, avg=41.10, stdev= 8.72 00:25:22.187 clat (usec): min=1866, max=16493, avg=9498.84, stdev=1565.89 00:25:22.187 lat (usec): min=1906, max=16538, avg=9539.94, stdev=1568.08 00:25:22.187 clat percentiles (usec): 00:25:22.187 | 1.00th=[ 6390], 5.00th=[ 7242], 10.00th=[ 7767], 20.00th=[ 8291], 00:25:22.187 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:25:22.187 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12256], 00:25:22.187 | 99.00th=[14222], 99.50th=[15008], 
99.90th=[15926], 99.95th=[16057], 00:25:22.187 | 99.99th=[16450] 00:25:22.187 bw ( KiB/s): min=66656, max=87136, per=85.03%, avg=75552.00, stdev=8532.95, samples=4 00:25:22.187 iops : min= 4166, max= 5446, avg=4722.00, stdev=533.31, samples=4 00:25:22.187 lat (msec) : 2=0.02%, 4=0.46%, 10=70.66%, 20=28.86% 00:25:22.187 cpu : usr=85.30%, sys=12.95%, ctx=21, majf=0, minf=42 00:25:22.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:22.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:22.188 issued rwts: total=18499,9502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:22.188 00:25:22.188 Run status group 0 (all jobs): 00:25:22.188 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (303MB), run=2008-2008msec 00:25:22.188 WRITE: bw=86.8MiB/s (91.0MB/s), 86.8MiB/s-86.8MiB/s (91.0MB/s-91.0MB/s), io=148MiB (156MB), run=1711-1711msec 00:25:22.188 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:22.448 rmmod nvme_tcp 00:25:22.448 rmmod nvme_fabrics 00:25:22.448 rmmod nvme_keyring 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 450014 ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 450014 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 450014 ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 450014 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 450014 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:22.448 19:14:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:22.448 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 450014' 00:25:22.449 killing process with pid 450014 00:25:22.449 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 450014 00:25:22.449 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 450014 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:22.709 19:14:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:25:24.622 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:25:24.883 19:14:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-save 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-restore 00:25:24.883 00:25:24.883 real 0m17.967s 00:25:24.883 user 1m11.135s 00:25:24.883 sys 0m7.571s 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.883 ************************************ 00:25:24.883 END TEST nvmf_fio_host 00:25:24.883 ************************************ 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:24.883 19:14:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.883 ************************************ 00:25:24.883 START TEST nvmf_failover 00:25:24.883 ************************************ 00:25:24.883 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:24.883 * Looking for test storage... 00:25:24.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.883 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:24.883 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:24.883 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.144 
19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:25.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.144 --rc genhtml_branch_coverage=1 00:25:25.144 --rc genhtml_function_coverage=1 00:25:25.144 --rc genhtml_legend=1 00:25:25.144 --rc geninfo_all_blocks=1 00:25:25.144 --rc geninfo_unexecuted_blocks=1 00:25:25.144 00:25:25.144 ' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:25.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.144 --rc genhtml_branch_coverage=1 00:25:25.144 --rc genhtml_function_coverage=1 00:25:25.144 --rc genhtml_legend=1 00:25:25.144 --rc geninfo_all_blocks=1 00:25:25.144 --rc geninfo_unexecuted_blocks=1 00:25:25.144 00:25:25.144 ' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:25.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.144 --rc genhtml_branch_coverage=1 00:25:25.144 --rc genhtml_function_coverage=1 00:25:25.144 --rc genhtml_legend=1 00:25:25.144 --rc geninfo_all_blocks=1 00:25:25.144 --rc geninfo_unexecuted_blocks=1 00:25:25.144 00:25:25.144 ' 00:25:25.144 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:25.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.144 --rc genhtml_branch_coverage=1 00:25:25.144 --rc genhtml_function_coverage=1 00:25:25.144 --rc genhtml_legend=1 00:25:25.144 --rc geninfo_all_blocks=1 00:25:25.144 --rc geninfo_unexecuted_blocks=1 00:25:25.144 00:25:25.144 ' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 
-- # [[ Linux == FreeBSD ]] 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:25.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@18 -- # nvmftestinit 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:25:25.145 19:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
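[Annotation] gather_supported_nvmf_pci_devs, traced above and below, builds the e810/x722/mlx arrays from a pci_bus_cache keyed by vendor:device ID (Intel 0x8086 for E810/X722, Mellanox 0x15b3 for mlx5). A minimal standalone sketch of the same lookup using lspci (an assumption; pci_bus_cache itself is populated elsewhere in setup):

  # List every PCI function matching the supported Intel E810 device IDs;
  # on this node this reports the two 0x159b ports found just below.
  for id in 1592 159b; do
      lspci -Dn -d "8086:${id}"
  done

The test then walks each matched function and resolves it to its kernel net device (cvl_0_0 and cvl_0_1 here) via /sys/bus/pci/devices/$pci/net/.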
00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:33.307 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.308 19:15:01 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # create_target_ns 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.308 19:15:01 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 
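[Annotation] val_to_ip, entered below with val=167772161, maps the interface pool's 32-bit counter to a dotted-quad address: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the next pool value 167772162 becomes 10.0.0.2. The trace only shows the final printf with its octets already expanded; a minimal sketch of the conversion it implies (the byte-extraction arithmetic here is an assumption):

  # Convert a 32-bit integer to a dotted-quad IPv4 address.
  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 0xff )) \
          $(( (val >> 16) & 0xff )) \
          $(( (val >>  8) & 0xff )) \
          $((  val        & 0xff ))
  }
  val_to_ip 167772161   # 10.0.0.1
  val_to_ip 167772162   # 10.0.0.2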
00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:33.308 10.0.0.1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:33.308 10.0.0.2 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:33.308 
19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:33.308 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:33.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.570 ms 00:25:33.309 00:25:33.309 --- 10.0.0.1 ping statistics --- 00:25:33.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.309 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:33.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:33.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:25:33.309 00:25:33.309 --- 10.0.0.2 ping statistics --- 00:25:33.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.309 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:33.309 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=456261 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 456261 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 456261 ']' 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.310 19:15:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.310 [2024-11-05 19:15:01.998834] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:25:33.310 [2024-11-05 19:15:01.998902] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.310 [2024-11-05 19:15:02.084703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.310 [2024-11-05 19:15:02.137058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:33.310 [2024-11-05 19:15:02.137112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.310 [2024-11-05 19:15:02.137122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.310 [2024-11-05 19:15:02.137130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.310 [2024-11-05 19:15:02.137137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.310 [2024-11-05 19:15:02.138986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.310 [2024-11-05 19:15:02.139248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.310 [2024-11-05 19:15:02.139249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.571 19:15:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.832 [2024-11-05 19:15:03.007499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.832 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:34.093 Malloc0 00:25:34.093 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.093 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.354 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.615 [2024-11-05 19:15:03.745180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.615 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:34.615 [2024-11-05 19:15:03.929622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.877 19:15:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:34.877 [2024-11-05 19:15:04.114166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=456848
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 456848 /var/tmp/bdevperf.sock
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 456848 ']'
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:34.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:34.877 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:35.819 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:35.819 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:35.819 19:15:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:36.080 NVMe0n1
00:25:36.081 19:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:36.342
00:25:36.342 19:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=457037
00:25:36.342 19:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:36.342 19:15:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
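The two bdev_nvme_attach_controller calls above use the same controller name (-b NVMe0) and the same NQN but different portals, and -x failover selects the failover multipath policy: the second call registers 10.0.0.2:4421 as an alternate path for the existing controller rather than creating a second bdev (note it prints no new bdev name above, while the first call printed NVMe0n1). A host-side sketch of the same idea, with the socket path taken from this run and the rpc.py path shortened:

  RPC='rpc.py -s /var/tmp/bdevperf.sock'
  # Primary path; this attach is what exposes the NVMe0n1 bdev that bdevperf drives
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Alternate path on port 4421; same -b NVMe0, so it joins as a standby path
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x failover

When the active path drops, outstanding I/O is retried on the standby path, which is exactly what the listener removals below provoke.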
00:25:37.285 19:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:37.548 [2024-11-05 19:15:06.723804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac94e0 is same with the state(6) to be set
[... this notice repeats verbatim for tqpair=0xac94e0, per-entry timestamps advancing through 19:15:06.724437; the duplicate entries are elided ...]
00:25:37.549 19:15:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:40.869 19:15:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:40.869
00:25:40.869 19:15:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:41.131 [2024-11-05 19:15:10.246619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaca030 is same with the state(6) to be set
[... the same notice repeats for tqpair=0xaca030 through 19:15:10.246897; duplicates elided ...]
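These recv-state notices are emitted per event while the target tears down the qpairs behind a removed listener; in this run they appear to be benign teardown noise (the test keeps going and completes), just extremely chatty. To quantify the repetition offline, one can tally occurrences per qpair pointer in the captured console log; a possible one-liner, with the log file name purely illustrative:

  # Count repeated recv-state notices by tqpair address
  grep -o 'tqpair=0x[0-9a-f]*' nvmf_failover.log | sort | uniq -c | sort -rn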
00:25:41.132 19:15:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:44.436 19:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:44.436 [2024-11-05 19:15:13.435656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:44.436 19:15:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:45.379 19:15:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:45.379 [2024-11-05 19:15:14.629955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f4e0 is same with the state(6) to be set
[... the same notice repeats for tqpair=0x98f4e0 through 19:15:14.630569; duplicates elided ...]
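Taken together, steps @43 through @57 are a listener ping-pong designed to force repeated path failovers while bdevperf keeps verifying I/O. Condensed, the sequence the log just walked through is:

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop the active path
  sleep 3                                                                                      # let I/O fail over to 4421
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover                                # add a third path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # drop the second path
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # drop the third path

Each removal is followed by a burst of the recv-state notices above as the corresponding qpairs are torn down.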
00:25:45.380 19:15:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 457037
00:25:51.979 {
00:25:51.980 "results": [
00:25:51.980 {
00:25:51.980 "job": "NVMe0n1",
00:25:51.980 "core_mask": "0x1",
00:25:51.980 "workload": "verify",
00:25:51.980 "status": "finished",
00:25:51.980 "verify_range": {
00:25:51.980 "start": 0,
00:25:51.980 "length": 16384
00:25:51.980 },
00:25:51.980 "queue_depth": 128,
00:25:51.980 "io_size": 4096,
00:25:51.980 "runtime": 15.006778,
00:25:51.980 "iops": 10990.700335541713,
00:25:51.980 "mibps": 42.932423185709816,
00:25:51.980 "io_failed": 10156,
00:25:51.980 "io_timeout": 0,
00:25:51.980 "avg_latency_us": 10943.358953382338,
00:25:51.980 "min_latency_us": 771.4133333333333,
00:25:51.980 "max_latency_us": 25777.493333333332
00:25:51.980 }
00:25:51.980 ],
00:25:51.980 "core_count": 1
00:25:51.980 }
00:25:51.980 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 456848 ']'
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456848'
killing process with pid 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 456848
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-05 19:15:04.196322] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:25:51.980 [2024-11-05 19:15:04.196380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456848 ]
00:25:51.980 [2024-11-05 19:15:04.267332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:51.980 [2024-11-05 19:15:04.303061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:51.980 Running I/O for 15 seconds...
00:25:51.980 11075.00 IOPS, 43.26 MiB/s [2024-11-05T18:15:21.303Z]
00:25:51.980 [2024-11-05 19:15:06.724775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:51.980 [2024-11-05 19:15:06.724811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for lba:94976 through lba:95192 (len:8 each); the captured output breaks off mid-entry at [2024-11-05 19:15:06.725298] ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.981 [2024-11-05 19:15:06.725940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.981 [2024-11-05 19:15:06.725947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.725957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.725964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.725974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.725981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.725991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.725998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 
[2024-11-05 19:15:06.726007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95760 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.982 [2024-11-05 19:15:06.726635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.982 [2024-11-05 19:15:06.726643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.983 [2024-11-05 19:15:06.726709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.983 [2024-11-05 19:15:06.726764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.726990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.983 [2024-11-05 19:15:06.726998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.727006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b0160 is same with the state(6) to be set 00:25:51.983 [2024-11-05 19:15:06.727015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.983 [2024-11-05 19:15:06.727021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.983 [2024-11-05 19:15:06.727031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95984 len:8 PRP1 0x0 PRP2 0x0 00:25:51.983 [2024-11-05 19:15:06.727039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.983 [2024-11-05 19:15:06.727084] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:51.983 [2024-11-05 19:15:06.727107] 
[2024-11-05 19:15:06.727107] [... four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3, cdw10/cdw11 all zero) printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:25:51.983 [2024-11-05 19:15:06.727170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-05 19:15:06.727196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2091d70 (9): Bad file descriptor
[2024-11-05 19:15:06.730782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-11-05 19:15:06.753770] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
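The sequence above is the host-side signature of an SPDK NVMe-oF TCP failover: the active listener goes away, bdev_nvme completes every in-flight and queued command as ABORTED - SQ DELETION (generic status 00/08), records the failover from 10.0.0.2:4420 to 10.0.0.2:4421, disconnects and resets the controller, and I/O then continues on the new path (the IOPS samples below keep arriving). A minimal sketch of how such a failover can be provoked with SPDK's RPC tooling follows; it assumes a running nvmf target reachable at 10.0.0.2, and the paths, socket name, and names Malloc0 / NVMe0 / /var/tmp/bdevperf.sock are illustrative, not taken from this job:

    # Target side: one subsystem backed by a malloc bdev, exported on two TCP listeners.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: start bdevperf in wait mode (-z) on its own RPC socket, attach both
    # paths under one controller name so bdev_nvme has an alternate trid, then run I/O.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Removing the listener the host is using tears down its submission queue: in-flight
    # commands complete as ABORTED - SQ DELETION and bdev_nvme fails over to port 4421.
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Whether the second bdev_nvme_attach_controller call registers a failover path implicitly or needs an explicit multipath-mode flag varies between SPDK versions, so treat the host-side half in particular as a sketch rather than a verified recipe.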
00:25:51.983 10971.00 IOPS, 42.86 MiB/s [2024-11-05T18:15:21.306Z] 11007.33 IOPS, 43.00 MiB/s [2024-11-05T18:15:21.306Z] 11063.25 IOPS, 43.22 MiB/s [2024-11-05T18:15:21.306Z]
[2024-11-05 19:15:10.248042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 19:15:10.248079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the NOTICE pairs again repeat for every outstanding command (READ lba:21224 through lba:21632, then WRITE lba:21712 through lba:21760), each completed as ABORTED - SQ DELETION (00/08) ...]
[2024-11-05 19:15:10.249128] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:51.985 [2024-11-05 19:15:10.249476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.985 [2024-11-05 19:15:10.249492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.985 [2024-11-05 19:15:10.249502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.985 [2024-11-05 19:15:10.249509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.986 [2024-11-05 19:15:10.249611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 
[2024-11-05 19:15:10.249646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.249987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.249996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.986 [2024-11-05 19:15:10.250153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.986 [2024-11-05 19:15:10.250161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
00:25:51.986 [2024-11-05 19:15:10.250187 - 19:15:10.250371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion / 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o: [repeated for WRITE lba:22192-22232 and READ lba:21704 — all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:51.987 [2024-11-05 19:15:10.250411] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:51.987 [2024-11-05 19:15:10.250437 - 19:15:10.250493] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [four ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:3-0 nsid:0 cdw10:00000000 cdw11:00000000 — each ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:51.987 [2024-11-05 19:15:10.250501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:51.987 [2024-11-05 19:15:10.250534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2091d70 (9): Bad file descriptor
00:25:51.987 [2024-11-05 19:15:10.254095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:51.987 [2024-11-05 19:15:10.414802] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:51.987 10738.40 IOPS, 41.95 MiB/s [2024-11-05T18:15:21.310Z] 10837.17 IOPS, 42.33 MiB/s [2024-11-05T18:15:21.310Z] 10875.00 IOPS, 42.48 MiB/s [2024-11-05T18:15:21.310Z] 10895.62 IOPS, 42.56 MiB/s [2024-11-05T18:15:21.310Z]
00:25:51.987 [2024-11-05 19:15:14.632347 - 19:15:14.633923] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated for the queued I/Os on sqid:1 — READ lba:55304-55808 and WRITE lba:55816-56016, all nsid:1 len:8 — every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:51.989 [2024-11-05 19:15:14.633932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56024 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.633939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.633948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.633955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.633965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.633972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.633981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.633989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.633998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.634005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.634015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.989 [2024-11-05 19:15:14.634022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.634045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.989 [2024-11-05 19:15:14.634053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56072 len:8 PRP1 0x0 PRP2 0x0 00:25:51.989 [2024-11-05 19:15:14.634061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.989 [2024-11-05 19:15:14.634072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.989 [2024-11-05 19:15:14.634078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56080 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56088 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56096 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56104 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56112 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56120 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56128 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56136 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 
19:15:14.634284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56144 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56152 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56160 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56168 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56176 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56184 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634440] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56192 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56200 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56208 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56216 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56224 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.634576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56232 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.634597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:51.990 [2024-11-05 19:15:14.634602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.634608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56240 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.634615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.649868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.649894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.649906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56248 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.649915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.649925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.649931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.649938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56256 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.649945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.990 [2024-11-05 19:15:14.649953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.990 [2024-11-05 19:15:14.649958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.990 [2024-11-05 19:15:14.649965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56264 len:8 PRP1 0x0 PRP2 0x0 00:25:51.990 [2024-11-05 19:15:14.649973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.649981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.649987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.649994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56272 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56280 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650046] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56288 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56296 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56304 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56312 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.991 [2024-11-05 19:15:14.650156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.991 [2024-11-05 19:15:14.650163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56320 len:8 PRP1 0x0 PRP2 0x0 00:25:51.991 [2024-11-05 19:15:14.650170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650214] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:51.991 [2024-11-05 19:15:14.650245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.991 [2024-11-05 19:15:14.650254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.991 [2024-11-05 19:15:14.650271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.991 [2024-11-05 19:15:14.650288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.991 [2024-11-05 19:15:14.650304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.991 [2024-11-05 19:15:14.650313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:51.991 [2024-11-05 19:15:14.650344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2091d70 (9): Bad file descriptor 00:25:51.991 [2024-11-05 19:15:14.653932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:51.991 10871.44 IOPS, 42.47 MiB/s [2024-11-05T18:15:21.314Z] [2024-11-05 19:15:14.723866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:51.991 10859.70 IOPS, 42.42 MiB/s [2024-11-05T18:15:21.314Z] 10892.36 IOPS, 42.55 MiB/s [2024-11-05T18:15:21.314Z] 10935.75 IOPS, 42.72 MiB/s [2024-11-05T18:15:21.314Z] 10982.77 IOPS, 42.90 MiB/s [2024-11-05T18:15:21.314Z] 10992.07 IOPS, 42.94 MiB/s 00:25:51.991 Latency(us) 00:25:51.991 [2024-11-05T18:15:21.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.991 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:51.991 Verification LBA range: start 0x0 length 0x4000 00:25:51.991 NVMe0n1 : 15.01 10990.70 42.93 676.76 0.00 10943.36 771.41 25777.49 00:25:51.991 [2024-11-05T18:15:21.314Z] =================================================================================================================== 00:25:51.991 [2024-11-05T18:15:21.314Z] Total : 10990.70 42.93 676.76 0.00 10943.36 771.41 25777.49 00:25:51.991 Received shutdown signal, test time was about 15.000000 seconds 00:25:51.991 00:25:51.991 Latency(us) 00:25:51.991 [2024-11-05T18:15:21.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.991 [2024-11-05T18:15:21.314Z] =================================================================================================================== 00:25:51.991 [2024-11-05T18:15:21.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=460362 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 460362 /var/tmp/bdevperf.sock 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:51.991 
19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 460362 ']' 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.991 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:52.563 19:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:52.563 19:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:52.563 19:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:52.825 [2024-11-05 19:15:21.916471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.825 19:15:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:52.825 [2024-11-05 19:15:22.096905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:52.825 19:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:53.396 NVMe0n1 00:25:53.396 19:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:53.658 00:25:53.658 19:15:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:53.920 00:25:53.920 19:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:53.920 19:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:54.181 19:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:54.442 19:15:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:57.745 19:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:25:57.745 19:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:57.745 19:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=461671 00:25:57.745 19:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:57.745 19:15:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 461671 00:25:58.688 { 00:25:58.688 "results": [ 00:25:58.688 { 00:25:58.688 "job": "NVMe0n1", 00:25:58.688 "core_mask": "0x1", 00:25:58.688 "workload": "verify", 00:25:58.688 "status": "finished", 00:25:58.688 "verify_range": { 00:25:58.688 "start": 0, 00:25:58.688 "length": 16384 00:25:58.688 }, 00:25:58.688 "queue_depth": 128, 00:25:58.688 "io_size": 4096, 00:25:58.688 "runtime": 1.006545, 00:25:58.688 "iops": 11013.913933306509, 00:25:58.688 "mibps": 43.02310130197855, 00:25:58.688 "io_failed": 0, 00:25:58.688 "io_timeout": 0, 00:25:58.688 "avg_latency_us": 11558.872466173552, 00:25:58.688 "min_latency_us": 1174.1866666666667, 00:25:58.688 "max_latency_us": 10048.853333333333 00:25:58.688 } 00:25:58.688 ], 00:25:58.688 "core_count": 1 00:25:58.688 } 00:25:58.688 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:58.688 [2024-11-05 19:15:20.969388] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:25:58.688 [2024-11-05 19:15:20.969446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460362 ] 00:25:58.688 [2024-11-05 19:15:21.040458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.688 [2024-11-05 19:15:21.077316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.688 [2024-11-05 19:15:23.525185] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:58.688 [2024-11-05 19:15:23.525230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.688 [2024-11-05 19:15:23.525243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.688 [2024-11-05 19:15:23.525253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.688 [2024-11-05 19:15:23.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.688 [2024-11-05 19:15:23.525269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.688 [2024-11-05 19:15:23.525276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.688 [2024-11-05 19:15:23.525284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.688 [2024-11-05 19:15:23.525291] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.688 [2024-11-05 19:15:23.525299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:58.688 [2024-11-05 19:15:23.525324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:58.688 [2024-11-05 19:15:23.525338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a7d70 (9): Bad file descriptor 00:25:58.688 [2024-11-05 19:15:23.546566] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:58.688 Running I/O for 1 seconds... 00:25:58.688 10927.00 IOPS, 42.68 MiB/s 00:25:58.688 Latency(us) 00:25:58.688 [2024-11-05T18:15:28.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.688 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:58.688 Verification LBA range: start 0x0 length 0x4000 00:25:58.688 NVMe0n1 : 1.01 11013.91 43.02 0.00 0.00 11558.87 1174.19 10048.85 00:25:58.688 [2024-11-05T18:15:28.011Z] =================================================================================================================== 00:25:58.688 [2024-11-05T18:15:28.011Z] Total : 11013.91 43.02 0.00 0.00 11558.87 1174.19 10048.85 00:25:58.688 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:58.688 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:58.949 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:58.949 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:58.949 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:59.211 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.471 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 460362 ']' 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 460362' 00:26:02.885 killing process with pid 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 460362 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:02.885 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:02.885 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:02.885 rmmod nvme_tcp 00:26:03.147 rmmod nvme_fabrics 00:26:03.147 rmmod nvme_keyring 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 456261 ']' 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 456261 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 456261 ']' 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 456261 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 456261 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:03.147 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 456261' 00:26:03.148 killing process with pid 456261 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 456261 00:26:03.148 
19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 456261 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:03.148 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-save 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-restore 00:26:05.694 00:26:05.694 real 0m40.488s 00:26:05.694 user 
2m4.257s 00:26:05.694 sys 0m8.610s 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:05.694 ************************************ 00:26:05.694 END TEST nvmf_failover 00:26:05.694 ************************************ 00:26:05.694 19:15:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.695 ************************************ 00:26:05.695 START TEST nvmf_host_multipath_status 00:26:05.695 ************************************ 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.695 * Looking for test storage... 00:26:05.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.695 --rc genhtml_branch_coverage=1 00:26:05.695 --rc genhtml_function_coverage=1 00:26:05.695 --rc genhtml_legend=1 00:26:05.695 --rc geninfo_all_blocks=1 00:26:05.695 --rc geninfo_unexecuted_blocks=1 00:26:05.695 00:26:05.695 ' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.695 --rc genhtml_branch_coverage=1 00:26:05.695 --rc genhtml_function_coverage=1 00:26:05.695 --rc genhtml_legend=1 00:26:05.695 --rc geninfo_all_blocks=1 00:26:05.695 --rc geninfo_unexecuted_blocks=1 00:26:05.695 00:26:05.695 ' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.695 --rc genhtml_branch_coverage=1 00:26:05.695 --rc genhtml_function_coverage=1 00:26:05.695 --rc genhtml_legend=1 00:26:05.695 --rc geninfo_all_blocks=1 00:26:05.695 --rc geninfo_unexecuted_blocks=1 00:26:05.695 00:26:05.695 ' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:05.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.695 --rc genhtml_branch_coverage=1 00:26:05.695 --rc genhtml_function_coverage=1 00:26:05.695 --rc genhtml_legend=1 00:26:05.695 --rc geninfo_all_blocks=1 00:26:05.695 --rc geninfo_unexecuted_blocks=1 00:26:05.695 00:26:05.695 ' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:05.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
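
The "line 31: [: : integer expression expected" message above is benign but worth decoding: nvmf/common.sh ran '[' '' -eq 1 ']', and the numeric -eq operator of test requires integer operands, so an empty expansion makes [ print the complaint and return non-zero, which the script simply treats as "condition false". A two-line reproduction plus an illustrative guard (not the actual common.sh fix):

    var=
    [ "$var" -eq 1 ] && echo matched                    # stderr: "[: : integer expression expected"
    [[ -n $var ]] && [ "$var" -eq 1 ] && echo matched   # guard skips the numeric test when empty
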
00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:05.695 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:26:05.696 19:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.838 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.839 Found 0000:4b:00.0 (0x8086 
- 0x159b) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 
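
The device scan above maps the supported PCI IDs (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus the Mellanox list) to their kernel net devices purely through sysfs; no vendor tool is involved. The essence of one iteration, assuming the standard sysfs layout:

    pci=0000:4b:00.0                                  # first E810 port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob the netdev names
    if [[ -e ${pci_net_devs[0]} ]]; then
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    fi
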
00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:13.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # create_target_ns 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:13.839 19:15:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:13.839 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:13.840 19:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:13.840 10.0.0.1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:13.840 10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 
-- # [[ phy == veth ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
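
Everything setup_interfaces has done so far condenses to a handful of iproute2/iptables calls: one physical port stays in the root namespace as the initiator, the other moves into nvmf_ns_spdk as the target, each side gets an address from the 10.0.0.0/24 pool, and port 4420 is opened before connectivity is verified by the pings that follow. Condensed from the trace (device and namespace names as in this run):

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_0               # initiator side
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
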
00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:13.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.708 ms 00:26:13.840 00:26:13.840 --- 10.0.0.1 ping statistics --- 00:26:13.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.840 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 
1 10.0.0.2' 00:26:13.840 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:13.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:26:13.840 00:26:13.840 --- 10.0.0.2 ping statistics --- 00:26:13.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.841 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:13.841 19:15:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.841 
19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=466839 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 466839 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 466839 ']' 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.841 19:15:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:13.841 19:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.841 [2024-11-05 19:15:42.404453] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:26:13.841 [2024-11-05 19:15:42.404528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.841 [2024-11-05 19:15:42.487128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.841 [2024-11-05 19:15:42.527706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.841 [2024-11-05 19:15:42.527740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.842 [2024-11-05 19:15:42.527755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.842 [2024-11-05 19:15:42.527762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.842 [2024-11-05 19:15:42.527768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
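
nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers before any configuration is sent; the RPCs that follow then build the multipath topology (tcp transport, Malloc0, cnode1 with ANA reporting enabled via -r, listeners on 4420 and 4421). A reduced launch-and-wait sketch; polling rpc_get_methods is one common readiness idiom and is assumed here, not lifted from waitforlisten itself:

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!                                        # 466839 in this run
    # Block until the app answers on its RPC socket before configuring it.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1      # bail out if the target died
        sleep 0.5
    done
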
00:26:13.842 [2024-11-05 19:15:42.529012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.842 [2024-11-05 19:15:42.529101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=466839 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:14.103 [2024-11-05 19:15:43.379198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.103 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:14.381 Malloc0 00:26:14.381 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:14.643 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.643 19:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.904 [2024-11-05 19:15:44.063070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.904 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:15.166 [2024-11-05 19:15:44.231462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=467267 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 467267 
/var/tmp/bdevperf.sock 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 467267 ']' 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:15.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:15.166 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.427 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.689 Nvme0n1 00:26:15.689 19:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.949 Nvme0n1 00:26:16.211 19:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:16.211 19:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:18.122 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:18.122 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.382 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.382 19:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.767 19:15:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.767 19:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.767 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.767 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.767 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.767 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.027 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.027 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.027 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.027 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.288 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.288 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.288 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.288 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.548 19:15:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:20.548 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.808 19:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.068 19:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:22.008 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:22.008 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.008 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.008 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.269 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.529 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.529 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.529 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.529 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.789 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.789 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.789 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.789 19:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.789 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.789 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.789 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.789 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.049 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.049 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:23.049 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.309 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.570 19:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:24.511 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:24.511 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.511 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.511 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.771 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.771 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:24.772 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.772 19:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.772 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.772 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.772 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.772 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.032 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.032 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.032 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.032 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.293 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.554 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.554 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:25.554 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:25.814 19:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:25.814 19:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.196 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.456 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.456 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.456 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.456 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.716 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.716 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.716 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:27.716 19:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:27.977 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:28.237 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.497 19:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:29.437 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:29.438 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.438 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.438 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.698 19:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.959 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.959 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.959 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.959 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.219 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.478 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.478 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:30.478 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:30.738 19:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.738 19:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.120 19:16:01 
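
Each check_status round in the trace expands into six of those port_status calls, three attributes per port. A sketch of that wrapper, with the argument order taken from the @68-@73 records (a hypothetical reconstruction, not the script itself):

    # check_status CUR_4420 CUR_4421 CONN_4420 CONN_4421 ACC_4420 ACC_4421
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }
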
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.120 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.380 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.380 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.380 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.380 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.640 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.640 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.640 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.640 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.900 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.900 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.900 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.900 
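
The expected values track ANA semantics: a listener set inaccessible stops being accessible, while optimized and non_optimized listeners stay accessible and only the current flag moves. A hypothetical helper stating that mapping, inferred from the rounds above rather than taken from the test:

    # Inferred, for illustration only: the 'accessible' value a given ANA
    # state should produce in bdev_nvme_get_io_paths output.
    expected_accessible() {
        case $1 in
            optimized|non_optimized) echo true ;;
            inaccessible)            echo false ;;
        esac
    }
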
19:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.900 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.900 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:33.160 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:33.161 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:33.420 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.420 19:16:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.802 19:16:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.802 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.802 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.802 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.802 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.063 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.063 19:16:04 
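
The @116 record above is the pivot of the second half of the test: it switches the bdev to the active/active multipath policy, after which both optimized paths may be current at once. The call as issued in the trace:

    # As invoked at multipath_status.sh@116: move Nvme0n1 from the default
    # active/passive policy to active/active before re-running the checks.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

Consistent with that, the next check_status round (@121) expects current=true on both 4420 and 4421.
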
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.063 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.063 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.323 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.323 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.323 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.323 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:35.584 19:16:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.845 19:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:35.845 19:16:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.228 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.489 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.489 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.489 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.489 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.750 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.750 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.750 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.750 19:16:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.750 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.750 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.750 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.750 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.010 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.010 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:38.010 
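
set_ANA_state, called at @59/@60 throughout this trace, is one listener update per port, after which the caller sleeps 1 second so the host can observe the ANA change. A sketch under the same assumptions as the earlier snippets (reusing the $rpc path; note the target-side calls go to the default RPC socket, not bdevperf's):

    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }
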
19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.270 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:38.530 19:16:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:39.472 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:39.472 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.472 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.472 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.733 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.994 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.994 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.994 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.994 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.255 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.519 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.519 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:40.519 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.780 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:41.040 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.983 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.244 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.505 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.505 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.505 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.505 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.765 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.765 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.765 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.765 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.765 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.765 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:42.765 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.765 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 467267 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 467267 ']' 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 467267 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 467267 00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2
00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 467267'
killing process with pid 467267
00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 467267
00:26:43.026 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 467267
00:26:43.026 {
00:26:43.026   "results": [
00:26:43.026     {
00:26:43.026       "job": "Nvme0n1",
00:26:43.026       "core_mask": "0x4",
00:26:43.026       "workload": "verify",
00:26:43.026       "status": "terminated",
00:26:43.026       "verify_range": {
00:26:43.026         "start": 0,
00:26:43.026         "length": 16384
00:26:43.026       },
00:26:43.026       "queue_depth": 128,
00:26:43.026       "io_size": 4096,
00:26:43.026       "runtime": 26.851575,
00:26:43.026       "iops": 10797.914088838364,
00:26:43.026       "mibps": 42.17935190952486,
00:26:43.026       "io_failed": 0,
00:26:43.026       "io_timeout": 0,
00:26:43.026       "avg_latency_us": 11837.401768773647,
00:26:43.026       "min_latency_us": 259.41333333333336,
00:26:43.026       "max_latency_us": 3019898.88
00:26:43.026     }
00:26:43.026   ],
00:26:43.026   "core_count": 1
00:26:43.026 }
00:26:43.290 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 467267
00:26:43.290 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-05 19:15:44.280612] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
[2024-11-05 19:15:44.280671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467267 ]
[2024-11-05 19:15:44.339204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 19:15:44.368029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
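
The JSON block a few records up is bdevperf's termination summary for the whole 26.85-second run. If that block is captured to a file (results.json here is purely illustrative), one jq line reduces it to the headline numbers:

    # Illustrative: summarize the termination JSON printed above.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json
    # -> Nvme0n1: 10797.914088838364 IOPS, 42.17935190952486 MiB/s, avg latency 11837.401768773647 us
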
00:26:43.290 9695.00 IOPS, 37.87 MiB/s [2024-11-05T18:16:12.613Z]
9690.00 IOPS, 37.85 MiB/s [2024-11-05T18:16:12.613Z]
9695.33 IOPS, 37.87 MiB/s [2024-11-05T18:16:12.613Z]
9695.50 IOPS, 37.87 MiB/s [2024-11-05T18:16:12.613Z]
9925.40 IOPS, 38.77 MiB/s [2024-11-05T18:16:12.613Z]
10381.17 IOPS, 40.55 MiB/s [2024-11-05T18:16:12.613Z]
10761.00 IOPS, 42.04 MiB/s [2024-11-05T18:16:12.613Z]
10756.12 IOPS, 42.02 MiB/s [2024-11-05T18:16:12.613Z]
10635.00 IOPS, 41.54 MiB/s [2024-11-05T18:16:12.613Z]
10534.20 IOPS, 41.15 MiB/s [2024-11-05T18:16:12.613Z]
10457.27 IOPS, 40.85 MiB/s [2024-11-05T18:16:12.613Z]
10395.75 IOPS, 40.61 MiB/s [2024-11-05T18:16:12.613Z]
[2024-11-05 19:15:57.412230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[2024-11-05 19:15:57.412295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[2024-11-05 19:15:57.412312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
[2024-11-05 19:15:57.412329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0
[2024-11-05 19:15:57.412344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[2024-11-05 19:15:57.412360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0
[2024-11-05 19:15:57.412375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0
[2024-11-05 19:15:57.412391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 19:15:57.412396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[2024-11-05
19:15:57.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:43.290 [2024-11-05 19:15:57.412579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.290 [2024-11-05 19:15:57.412584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 
[... repeated nvme_qpair notices elided: alternating "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE/READ sqid:1 nsid:1 ..." and "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) ..." pairs for lba 81512-82520 on qid:1, logged 2024-11-05 19:15:57 ...]
00:26:43.293 9605.92 IOPS, 37.52 MiB/s [2024-11-05T18:16:12.616Z]
8919.79 IOPS, 34.84 MiB/s [2024-11-05T18:16:12.616Z]
8325.13 IOPS, 32.52 MiB/s [2024-11-05T18:16:12.616Z]
8598.44 IOPS, 33.59 MiB/s [2024-11-05T18:16:12.616Z]
8844.18 IOPS, 34.55 MiB/s [2024-11-05T18:16:12.616Z]
9252.89 IOPS, 36.14 MiB/s [2024-11-05T18:16:12.616Z]
9656.68 IOPS, 37.72 MiB/s [2024-11-05T18:16:12.616Z]
9949.35 IOPS, 38.86 MiB/s [2024-11-05T18:16:12.616Z]
10087.52 IOPS, 39.40 MiB/s [2024-11-05T18:16:12.616Z]
10213.82 IOPS, 39.90 MiB/s [2024-11-05T18:16:12.616Z]
10470.00 IOPS, 40.90 MiB/s [2024-11-05T18:16:12.616Z]
10736.96 IOPS, 41.94 MiB/s [2024-11-05T18:16:12.616Z]
[... further nvme_qpair notices elided: the same WRITE/READ command-plus-completion pattern for lba 53352-54136 on qid:1, logged 2024-11-05 19:16:10, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
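The elided notices are the expected signature of this phase of the multipath test: the verify workload keeps issuing I/O while one path reports the namespace as ANA-inaccessible (NVMe status code type 0x3 "path related", status code 0x02), so every queued WRITE/READ on that path completes with 03/02, and the IOPS readings above dip and recover as I/O settles on the surviving path. A minimal sketch of how a run like this flips a path's ANA state with the same rpc.py invoked later in this log; the listener address, port, and timing below are illustrative assumptions, not values taken from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Mark one listener inaccessible; in-flight I/O on it then completes with
    # ASYMMETRIC ACCESS INACCESSIBLE (03/02), as in the notices above.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 10    # let the initiator retry and settle on the other path
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized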
00:26:43.294 10883.96 IOPS, 42.52 MiB/s [2024-11-05T18:16:12.617Z]
10836.42 IOPS, 42.33 MiB/s [2024-11-05T18:16:12.617Z]
Received shutdown signal, test time was about 26.852189 seconds
00:26:43.294
00:26:43.294 Latency(us)
00:26:43.294 [2024-11-05T18:16:12.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.294 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:43.294 Verification LBA range: start 0x0 length 0x4000
00:26:43.294 Nvme0n1 : 26.85 10797.91 42.18 0.00 0.00 11837.40 259.41 3019898.88
00:26:43.294 [2024-11-05T18:16:12.617Z] ===================================================================================================================
00:26:43.294 [2024-11-05T18:16:12.617Z] Total : 10797.91 42.18 0.00 0.00 11837.40 259.41 3019898.88
00:26:43.294 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20}
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:43.554 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 466839 ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 466839
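Boiled down, the nvmftestfini/nvmfcleanup sequence traced above does the following; this is a condensed sketch of the helpers' effect, not their verbatim source:

    # Drop the test subsystem first, then unwind the host side.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync                          # flush I/O before unloading kernel modules
    for i in {1..20}; do          # nvmfcleanup retries until the modules unload
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    kill 466839 && wait 466839    # stop the target app (reactor_0) by its pid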
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 466839 ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 466839
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 466839
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 466839'
killing process with pid 466839
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 466839
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 466839
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
19:16:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns
00:26:46.098 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]]
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
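The flush_ip calls reduce to plain iproute2. Reconstructed from the xtrace above (not from the setup.sh source itself), the helper behaves like:

    flush_ip() {
        local dev=$1 in_ns=$2                     # in_ns is empty in this run
        [[ -n $in_ns ]] && in_ns="ip netns exec $in_ns"
        eval "$in_ns ip addr flush dev $dev"      # drop every address on the test port
    }
    flush_ip cvl_0_0    # cvl_0_1 gets the same treatment in the lines that follow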
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]]
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=()
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-save
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-restore
00:26:46.099
00:26:46.099 real 0m40.301s
00:26:46.099 user 1m43.832s
00:26:46.099 sys 0m11.542s
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
19:16:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:46.099 ************************************
00:26:46.099 END TEST nvmf_host_multipath_status
00:26:46.099 ************************************
00:26:46.099 19:16:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
19:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
19:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
19:16:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.099 ************************************
00:26:46.099 START TEST nvmf_discovery_remove_ifc
00:26:46.099 ************************************
00:26:46.099 19:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
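run_test brackets every suite with the banners and the time-style accounting shown above; its shape is roughly the following sketch, not the verbatim autotest_common.sh helper:

    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"        # yields the real/user/sys triple printed above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }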
00:26:46.099 * Looking for test storage...
00:26:46.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:46.099
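The version dance above only decides whether this lcov still takes the legacy --rc options: "1.15" is split on IFS=.-: and compared field by field against "2". Distilled into stand-alone form (a hypothetical condensation of scripts/common.sh, not its actual text):

    ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this host
    IFS=.-: read -ra ver1 <<< "$ver"
    IFS=.-: read -ra ver2 <<< "2"
    # "lt 1.15 2" holds because the first fields already differ: 1 < 2
    if (( ${ver1[0]:-0} < ${ver2[0]:-0} )); then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi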
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.099 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:46.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:46.100 
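(Note: the "[: : integer expression expected" complaint above is bash's [ builtin rejecting an empty string where -eq needs an integer; the trace shows nvmf/common.sh line 31 expanding an unset variable into '[' '' -eq 1 ']', and the script simply falls through the failed test. Minimal reproduction, using a stand-in variable name rather than whatever common.sh actually tests:

SOME_NUMERIC_FLAG=
[ "$SOME_NUMERIC_FLAG" -eq 1 ]        # [: : integer expression expected, exit status 2
[ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]   # defaulted form evaluates cleanly to false

)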
19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # nvmftestinit 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:26:46.100 19:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.246 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
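(Note: the gather_supported_nvmf_pci_devs walk above amounts to bucketing PCI functions by vendor:device ID and, because this job runs with SPDK_TEST_NVMF_NICS=e810, keeping only the E810 bucket. A simplified sketch assuming the usual sysfs layout; the in-tree helper builds a pci_bus_cache first, and only a subset of the Mellanox IDs from the trace is repeated here:

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
for pci in /sys/bus/pci/devices/*; do
  ven=$(<"$pci/vendor") dev=$(<"$pci/device")
  case "$ven:$dev" in
    "$intel:0x1592"|"$intel:0x159b")         e810+=("${pci##*/}") ;;  # E810 variants
    "$intel:0x37d2")                         x722+=("${pci##*/}") ;;
    "$mellanox:0x101d"|"$mellanox:0x1017")   mlx+=("${pci##*/}") ;;
  esac
done
pci_devs=("${e810[@]}")   # NICS=e810: here the two 0x159b functions, 0000:4b:00.0/.1

)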
00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.246 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:54.246 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.247 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:54.247 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.247 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:54.247 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:26:54.247 10.0.0.1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:54.247 10.0.0.2 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:54.247 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:54.247 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.248 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:54.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.668 ms 00:26:54.248 00:26:54.248 --- 10.0.0.1 ping statistics --- 00:26:54.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.248 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:54.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:54.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:26:54.248 00:26:54.248 --- 10.0.0.2 ping statistics --- 00:26:54.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.248 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:54.248 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:54.248 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:54.249 19:16:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=477160 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 477160 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 477160 ']' 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:54.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:54.249 19:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.249 [2024-11-05 19:16:22.887598] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:26:54.249 [2024-11-05 19:16:22.887667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.249 [2024-11-05 19:16:22.985925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.249 [2024-11-05 19:16:23.036689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.249 [2024-11-05 19:16:23.036739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.249 [2024-11-05 19:16:23.036758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.249 [2024-11-05 19:16:23.036765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.249 [2024-11-05 19:16:23.036772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.249 [2024-11-05 19:16:23.037517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.539 [2024-11-05 19:16:23.778544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.539 [2024-11-05 19:16:23.786805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:54.539 null0 00:26:54.539 [2024-11-05 19:16:23.818738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=477339 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 477339 /tmp/host.sock 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 477339 ']' 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:54.539 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:54.539 19:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.838 [2024-11-05 19:16:23.895017] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:26:54.838 [2024-11-05 19:16:23.895081] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477339 ] 00:26:54.838 [2024-11-05 19:16:23.970552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.838 [2024-11-05 19:16:24.012561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.419 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.679 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.679 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:55.679 19:16:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.679 19:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.623 [2024-11-05 19:16:25.828709] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:56.623 [2024-11-05 19:16:25.828729] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:56.623 [2024-11-05 19:16:25.828743] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.885 [2024-11-05 19:16:25.957211] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:56.885 [2024-11-05 19:16:26.179517] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:56.885 [2024-11-05 19:16:26.180464] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d923f0:1 started. 00:26:56.885 [2024-11-05 19:16:26.182066] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:56.885 [2024-11-05 19:16:26.182109] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:56.885 [2024-11-05 19:16:26.182129] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:56.885 [2024-11-05 19:16:26.182143] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:56.885 [2024-11-05 19:16:26.182163] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:26:56.885 [2024-11-05 19:16:26.186604] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d923f0 was disconnected and freed. delete nvme_qpair. 
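(Note: the get_bdev_list/wait_for_bdev machinery traced around this point reduces to polling the host app's RPC socket once per second until the bdev list matches an expected value. Reconstructed from this trace; the in-tree helpers in host/discovery_remove_ifc.sh may add timeout handling on top:

get_bdev_list() {
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
  local expected=$1
  while [[ $(get_bdev_list) != "$expected" ]]; do
    sleep 1
  done
}
wait_for_bdev nvme0n1   # discovery must surface the namespace as a bdev
wait_for_bdev ''        # after the interface is removed, it must vanish again

)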
00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.885 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:26:57.147 19:16:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.090 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:26:58.351 19:16:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.351 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:26:58.351 19:16:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:26:59.296 19:16:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:00.239 19:16:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.621 19:16:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:01.621 19:16:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.564 [2024-11-05 19:16:31.622856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:02.564 [2024-11-05 19:16:31.622898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.564 [2024-11-05 19:16:31.622910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.564 [2024-11-05 19:16:31.622920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.564 [2024-11-05 19:16:31.622928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.564 [2024-11-05 19:16:31.622937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.564 [2024-11-05 19:16:31.622944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.564 [2024-11-05 19:16:31.622952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.564 [2024-11-05 19:16:31.622959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.564 [2024-11-05 19:16:31.622968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:02.564 [2024-11-05 19:16:31.622975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.564 [2024-11-05 19:16:31.622988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ec00 is same with the state(6) to be set 00:27:02.564 [2024-11-05 19:16:31.632877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6ec00 (9): 
Bad file descriptor 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.564 [2024-11-05 19:16:31.642916] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:02.564 [2024-11-05 19:16:31.642928] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:02.564 [2024-11-05 19:16:31.642933] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:02.564 [2024-11-05 19:16:31.642939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:02.564 [2024-11-05 19:16:31.642961] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:02.564 19:16:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:03.506 [2024-11-05 19:16:32.692781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:03.506 [2024-11-05 19:16:32.692831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6ec00 with addr=10.0.0.2, port=4420 00:27:03.506 [2024-11-05 19:16:32.692845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6ec00 is same with the state(6) to be set 00:27:03.506 [2024-11-05 19:16:32.692871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6ec00 (9): Bad file descriptor 00:27:03.506 [2024-11-05 19:16:32.693246] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:03.506 [2024-11-05 19:16:32.693270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:03.506 [2024-11-05 19:16:32.693278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:03.506 [2024-11-05 19:16:32.693288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:03.506 [2024-11-05 19:16:32.693295] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:03.506 [2024-11-05 19:16:32.693301] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
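The rpc_cmd/jq/sort/xargs lines repeating above are the test's bdev poll: after the target-side address is deleted and cvl_0_1 is downed, the host app is asked for its bdev list once per second until nvme0n1 disappears. A minimal sketch of that helper pair, mirroring the xtrace from discovery_remove_ifc.sh (the /tmp/host.sock RPC socket is the one used in this run):

  get_bdev_list() {
      # Normalize the host app's bdev names to a single sorted line.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once per second until the list matches the expected value:
      # '' while waiting for nvme0n1 to vanish, nvme1n1 after re-attach.
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }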
00:27:03.506 [2024-11-05 19:16:32.693306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:03.506 [2024-11-05 19:16:32.693314] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:03.506 [2024-11-05 19:16:32.693319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:03.506 19:16:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:04.448 [2024-11-05 19:16:33.695694] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:04.448 [2024-11-05 19:16:33.695714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:04.448 [2024-11-05 19:16:33.695725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:04.448 [2024-11-05 19:16:33.695732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:04.448 [2024-11-05 19:16:33.695740] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:04.448 [2024-11-05 19:16:33.695751] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:04.448 [2024-11-05 19:16:33.695756] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:04.448 [2024-11-05 19:16:33.695760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
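Each one-second retry above replays the same sequence: connect() to 10.0.0.2:4420 times out with errno 110 because the address no longer exists on cvl_0_1, the reconnect poller marks the ctrlr failed, pending resets are cleared, and a new reconnect is scheduled. To watch the same outage from the RPC side, the controller state can be dumped on the host socket (an illustrative command, not part of this test script):

  # Hypothetical inspection step while the path is down; bdev_nvme_get_controllers
  # reports each attached ctrlr and its current state.
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq '.'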
00:27:04.448 [2024-11-05 19:16:33.695782] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:04.448 [2024-11-05 19:16:33.695805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.448 [2024-11-05 19:16:33.695815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.448 [2024-11-05 19:16:33.695827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.448 [2024-11-05 19:16:33.695834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.448 [2024-11-05 19:16:33.695843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.448 [2024-11-05 19:16:33.695850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.448 [2024-11-05 19:16:33.695858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.448 [2024-11-05 19:16:33.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.448 [2024-11-05 19:16:33.695874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.448 [2024-11-05 19:16:33.695882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.448 [2024-11-05 19:16:33.695890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
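With every admin command aborted by the SQ deletion, the discovery poller drops the dead subsystem entry above. From here the test flips into its recovery phase: restore the target-side interface and wait for discovery to bring the namespace back as nvme1n1, which is exactly what the @77/@78/@81 steps below do. Condensed (a sketch with the netns and device names from this run):

  # Restore the target address and link inside the target namespace ...
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  # ... then block until the discovery service re-creates the bdev.
  wait_for_bdev nvme1n1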
00:27:04.448 [2024-11-05 19:16:33.696306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5e340 (9): Bad file descriptor 00:27:04.448 [2024-11-05 19:16:33.697320] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:04.448 [2024-11-05 19:16:33.697331] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:04.449 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.709 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:27:04.709 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:04.709 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:04.709 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:04.710 19:16:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.654 19:16:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:06.596 [2024-11-05 19:16:35.752940] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:06.596 [2024-11-05 19:16:35.752957] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:06.596 [2024-11-05 19:16:35.752972] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.596 [2024-11-05 19:16:35.839245] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:06.857 19:16:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.857 19:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.857 19:16:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:06.857 [2024-11-05 19:16:36.062507] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:06.857 [2024-11-05 19:16:36.063441] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1da1150:1 started. 
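The attach and log-page lines above come from the discovery service the test started earlier: it holds a persistent connection to the discovery controller at 10.0.0.2:8009 and auto-attaches whatever NVM subsystems the log page reports, which is why the nvme1 controller reappears without an explicit connect. A hedged sketch of the RPC that sets such a poller up (flag spellings per SPDK's rpc.py client; treat the exact options as an assumption rather than a quote from this script):

  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -w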
00:27:06.857 [2024-11-05 19:16:36.064693] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:06.857 [2024-11-05 19:16:36.064728] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:06.857 [2024-11-05 19:16:36.064754] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:06.857 [2024-11-05 19:16:36.064768] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:06.857 [2024-11-05 19:16:36.064776] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:06.857 [2024-11-05 19:16:36.071006] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1da1150 was disconnected and freed. delete nvme_qpair. 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 477339 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 477339 ']' 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 477339 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:07.801 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 477339 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 477339' 00:27:08.062 killing process with pid 477339 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 477339 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 477339 00:27:08.062 19:16:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:08.062 rmmod nvme_tcp 00:27:08.062 rmmod nvme_fabrics 00:27:08.062 rmmod nvme_keyring 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 477160 ']' 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 477160 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 477160 ']' 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 477160 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:08.062 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 477160 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 477160' 00:27:08.063 killing process with pid 477160 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 477160 00:27:08.063 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 477160 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:08.323 19:16:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:10.240 19:16:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:27:10.240 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr 00:27:10.241 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save 00:27:10.241 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:10.241 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore 00:27:10.241 00:27:10.241 real 0m24.567s 00:27:10.241 user 0m29.720s 00:27:10.241 sys 0m7.107s 00:27:10.241 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:10.241 19:16:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.241 ************************************ 00:27:10.241 END TEST nvmf_discovery_remove_ifc 00:27:10.241 ************************************ 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.515 ************************************ 00:27:10.515 START TEST nvmf_identify_kernel_target 00:27:10.515 ************************************ 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.515 * Looking for test storage... 00:27:10.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:10.515 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.778 --rc genhtml_branch_coverage=1 00:27:10.778 --rc genhtml_function_coverage=1 00:27:10.778 --rc genhtml_legend=1 00:27:10.778 --rc geninfo_all_blocks=1 00:27:10.778 --rc geninfo_unexecuted_blocks=1 00:27:10.778 00:27:10.778 ' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.778 --rc genhtml_branch_coverage=1 00:27:10.778 --rc genhtml_function_coverage=1 00:27:10.778 --rc genhtml_legend=1 00:27:10.778 --rc geninfo_all_blocks=1 00:27:10.778 --rc geninfo_unexecuted_blocks=1 00:27:10.778 00:27:10.778 ' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.778 --rc genhtml_branch_coverage=1 00:27:10.778 --rc genhtml_function_coverage=1 00:27:10.778 --rc genhtml_legend=1 00:27:10.778 --rc geninfo_all_blocks=1 00:27:10.778 --rc geninfo_unexecuted_blocks=1 00:27:10.778 00:27:10.778 ' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:10.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.778 --rc genhtml_branch_coverage=1 00:27:10.778 --rc genhtml_function_coverage=1 00:27:10.778 --rc genhtml_legend=1 00:27:10.778 --rc geninfo_all_blocks=1 00:27:10.778 --rc geninfo_unexecuted_blocks=1 00:27:10.778 00:27:10.778 ' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:10.778 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:10.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:27:10.779 19:16:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:27:18.924 19:16:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:18.924 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:18.924 19:16:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:18.924 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:18.924 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:18.924 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.924 19:16:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # create_target_ns 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:18.924 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:18.925 19:16:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:18.925 10.0.0.1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:18.925 10.0.0.2 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
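The set_ip calls traced above turn entries from a 32-bit integer pool into dotted-quad addresses (167772161 is 0x0A000001, i.e. 10.0.0.1) and assign them to the initiator device directly and to the target device inside the nvmf_ns_spdk namespace. A minimal bash sketch of that flow follows; the trace only shows the final printf with already-split octets, so the bit-shift split inside val_to_ip is an assumption about the helper's internals:

    val_to_ip() {
        local val=$1
        # Split a 32-bit value into four octets (assumed implementation;
        # the trace shows only the resulting printf '%u.%u.%u.%u').
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }

    ip_initiator=$(val_to_ip 167772161)   # 10.0.0.1, assigned to cvl_0_0
    ip_target=$(val_to_ip 167772162)      # 10.0.0.2, assigned to cvl_0_1 in the netns
    ip addr add "$ip_initiator/24" dev cvl_0_0
    ip netns exec nvmf_ns_spdk ip addr add "$ip_target/24" dev cvl_0_1
    # ifalias is written so get_ip_address can read the address back later:
    echo "$ip_initiator" > /sys/class/net/cvl_0_0/ifalias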
00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:18.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.665 ms 00:27:18.925 00:27:18.925 --- 10.0.0.1 ping statistics --- 00:27:18.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.925 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:18.925 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:18.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:27:18.926 00:27:18.926 --- 10.0.0.2 ping statistics --- 00:27:18.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.926 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:18.926 19:16:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:18.926 19:16:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:18.926 19:16:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:21.474 Waiting for block devices as requested 00:27:21.736 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.736 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.736 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.999 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.999 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.999 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.260 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.260 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.260 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:22.521 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:22.521 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:22.521 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:22.782 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:22.782 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:22.782 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.782 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:23.043 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:23.305 No valid GPT data, bailing 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:27:23.305 19:16:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:23.305 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:23.567 00:27:23.567 Discovery Log Number of Records 2, Generation counter 2 00:27:23.567 =====Discovery Log Entry 0====== 00:27:23.567 trtype: tcp 00:27:23.567 adrfam: ipv4 00:27:23.567 subtype: current discovery subsystem 00:27:23.567 treq: not specified, sq flow control disable supported 00:27:23.567 portid: 1 00:27:23.567 trsvcid: 4420 00:27:23.567 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:23.567 traddr: 10.0.0.1 00:27:23.567 eflags: none 00:27:23.567 sectype: none 00:27:23.567 =====Discovery Log Entry 1====== 00:27:23.567 trtype: tcp 00:27:23.568 adrfam: ipv4 00:27:23.568 subtype: nvme subsystem 00:27:23.568 treq: not specified, sq flow control disable supported 00:27:23.568 portid: 1 00:27:23.568 trsvcid: 4420 00:27:23.568 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:23.568 traddr: 10.0.0.1 00:27:23.568 eflags: none 00:27:23.568 sectype: none 00:27:23.568 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:23.568 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:23.568 ===================================================== 00:27:23.568 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:23.568 ===================================================== 00:27:23.568 Controller Capabilities/Features 00:27:23.568 ================================ 00:27:23.568 Vendor ID: 0000 00:27:23.568 Subsystem Vendor ID: 0000 00:27:23.568 Serial Number: 437f6473547e90a69709 00:27:23.568 Model Number: Linux 00:27:23.568 Firmware Version: 6.8.9-20 00:27:23.568 Recommended Arb Burst: 0 00:27:23.568 
IEEE OUI Identifier: 00 00 00 00:27:23.568 Multi-path I/O 00:27:23.568 May have multiple subsystem ports: No 00:27:23.568 May have multiple controllers: No 00:27:23.568 Associated with SR-IOV VF: No 00:27:23.568 Max Data Transfer Size: Unlimited 00:27:23.568 Max Number of Namespaces: 0 00:27:23.568 Max Number of I/O Queues: 1024 00:27:23.568 NVMe Specification Version (VS): 1.3 00:27:23.568 NVMe Specification Version (Identify): 1.3 00:27:23.568 Maximum Queue Entries: 1024 00:27:23.568 Contiguous Queues Required: No 00:27:23.568 Arbitration Mechanisms Supported 00:27:23.568 Weighted Round Robin: Not Supported 00:27:23.568 Vendor Specific: Not Supported 00:27:23.568 Reset Timeout: 7500 ms 00:27:23.568 Doorbell Stride: 4 bytes 00:27:23.568 NVM Subsystem Reset: Not Supported 00:27:23.568 Command Sets Supported 00:27:23.568 NVM Command Set: Supported 00:27:23.568 Boot Partition: Not Supported 00:27:23.568 Memory Page Size Minimum: 4096 bytes 00:27:23.568 Memory Page Size Maximum: 4096 bytes 00:27:23.568 Persistent Memory Region: Not Supported 00:27:23.568 Optional Asynchronous Events Supported 00:27:23.568 Namespace Attribute Notices: Not Supported 00:27:23.568 Firmware Activation Notices: Not Supported 00:27:23.568 ANA Change Notices: Not Supported 00:27:23.568 PLE Aggregate Log Change Notices: Not Supported 00:27:23.568 LBA Status Info Alert Notices: Not Supported 00:27:23.568 EGE Aggregate Log Change Notices: Not Supported 00:27:23.568 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.568 Zone Descriptor Change Notices: Not Supported 00:27:23.568 Discovery Log Change Notices: Supported 00:27:23.568 Controller Attributes 00:27:23.568 128-bit Host Identifier: Not Supported 00:27:23.568 Non-Operational Permissive Mode: Not Supported 00:27:23.568 NVM Sets: Not Supported 00:27:23.568 Read Recovery Levels: Not Supported 00:27:23.568 Endurance Groups: Not Supported 00:27:23.568 Predictable Latency Mode: Not Supported 00:27:23.568 Traffic Based Keep ALive: Not Supported 00:27:23.568 Namespace Granularity: Not Supported 00:27:23.568 SQ Associations: Not Supported 00:27:23.568 UUID List: Not Supported 00:27:23.568 Multi-Domain Subsystem: Not Supported 00:27:23.568 Fixed Capacity Management: Not Supported 00:27:23.568 Variable Capacity Management: Not Supported 00:27:23.568 Delete Endurance Group: Not Supported 00:27:23.568 Delete NVM Set: Not Supported 00:27:23.568 Extended LBA Formats Supported: Not Supported 00:27:23.568 Flexible Data Placement Supported: Not Supported 00:27:23.568 00:27:23.568 Controller Memory Buffer Support 00:27:23.568 ================================ 00:27:23.568 Supported: No 00:27:23.568 00:27:23.568 Persistent Memory Region Support 00:27:23.568 ================================ 00:27:23.568 Supported: No 00:27:23.568 00:27:23.568 Admin Command Set Attributes 00:27:23.568 ============================ 00:27:23.568 Security Send/Receive: Not Supported 00:27:23.568 Format NVM: Not Supported 00:27:23.568 Firmware Activate/Download: Not Supported 00:27:23.568 Namespace Management: Not Supported 00:27:23.568 Device Self-Test: Not Supported 00:27:23.568 Directives: Not Supported 00:27:23.568 NVMe-MI: Not Supported 00:27:23.568 Virtualization Management: Not Supported 00:27:23.568 Doorbell Buffer Config: Not Supported 00:27:23.568 Get LBA Status Capability: Not Supported 00:27:23.568 Command & Feature Lockdown Capability: Not Supported 00:27:23.568 Abort Command Limit: 1 00:27:23.568 Async Event Request Limit: 1 00:27:23.568 Number of Firmware Slots: N/A 00:27:23.568 
Firmware Slot 1 Read-Only: N/A 00:27:23.568 Firmware Activation Without Reset: N/A 00:27:23.568 Multiple Update Detection Support: N/A 00:27:23.568 Firmware Update Granularity: No Information Provided 00:27:23.568 Per-Namespace SMART Log: No 00:27:23.568 Asymmetric Namespace Access Log Page: Not Supported 00:27:23.568 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:23.568 Command Effects Log Page: Not Supported 00:27:23.568 Get Log Page Extended Data: Supported 00:27:23.568 Telemetry Log Pages: Not Supported 00:27:23.568 Persistent Event Log Pages: Not Supported 00:27:23.568 Supported Log Pages Log Page: May Support 00:27:23.568 Commands Supported & Effects Log Page: Not Supported 00:27:23.568 Feature Identifiers & Effects Log Page:May Support 00:27:23.568 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.568 Data Area 4 for Telemetry Log: Not Supported 00:27:23.568 Error Log Page Entries Supported: 1 00:27:23.568 Keep Alive: Not Supported 00:27:23.568 00:27:23.568 NVM Command Set Attributes 00:27:23.568 ========================== 00:27:23.568 Submission Queue Entry Size 00:27:23.568 Max: 1 00:27:23.568 Min: 1 00:27:23.568 Completion Queue Entry Size 00:27:23.568 Max: 1 00:27:23.568 Min: 1 00:27:23.568 Number of Namespaces: 0 00:27:23.568 Compare Command: Not Supported 00:27:23.568 Write Uncorrectable Command: Not Supported 00:27:23.568 Dataset Management Command: Not Supported 00:27:23.568 Write Zeroes Command: Not Supported 00:27:23.568 Set Features Save Field: Not Supported 00:27:23.568 Reservations: Not Supported 00:27:23.568 Timestamp: Not Supported 00:27:23.568 Copy: Not Supported 00:27:23.568 Volatile Write Cache: Not Present 00:27:23.568 Atomic Write Unit (Normal): 1 00:27:23.568 Atomic Write Unit (PFail): 1 00:27:23.568 Atomic Compare & Write Unit: 1 00:27:23.568 Fused Compare & Write: Not Supported 00:27:23.568 Scatter-Gather List 00:27:23.568 SGL Command Set: Supported 00:27:23.568 SGL Keyed: Not Supported 00:27:23.568 SGL Bit Bucket Descriptor: Not Supported 00:27:23.568 SGL Metadata Pointer: Not Supported 00:27:23.568 Oversized SGL: Not Supported 00:27:23.568 SGL Metadata Address: Not Supported 00:27:23.568 SGL Offset: Supported 00:27:23.568 Transport SGL Data Block: Not Supported 00:27:23.568 Replay Protected Memory Block: Not Supported 00:27:23.568 00:27:23.568 Firmware Slot Information 00:27:23.568 ========================= 00:27:23.568 Active slot: 0 00:27:23.568 00:27:23.568 00:27:23.568 Error Log 00:27:23.568 ========= 00:27:23.568 00:27:23.568 Active Namespaces 00:27:23.568 ================= 00:27:23.568 Discovery Log Page 00:27:23.568 ================== 00:27:23.568 Generation Counter: 2 00:27:23.568 Number of Records: 2 00:27:23.568 Record Format: 0 00:27:23.568 00:27:23.568 Discovery Log Entry 0 00:27:23.568 ---------------------- 00:27:23.568 Transport Type: 3 (TCP) 00:27:23.568 Address Family: 1 (IPv4) 00:27:23.568 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:23.568 Entry Flags: 00:27:23.568 Duplicate Returned Information: 0 00:27:23.568 Explicit Persistent Connection Support for Discovery: 0 00:27:23.568 Transport Requirements: 00:27:23.568 Secure Channel: Not Specified 00:27:23.568 Port ID: 1 (0x0001) 00:27:23.568 Controller ID: 65535 (0xffff) 00:27:23.568 Admin Max SQ Size: 32 00:27:23.568 Transport Service Identifier: 4420 00:27:23.568 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:23.568 Transport Address: 10.0.0.1 00:27:23.568 Discovery Log Entry 1 00:27:23.568 ---------------------- 00:27:23.568 Transport 
Type: 3 (TCP) 00:27:23.568 Address Family: 1 (IPv4) 00:27:23.569 Subsystem Type: 2 (NVM Subsystem) 00:27:23.569 Entry Flags: 00:27:23.569 Duplicate Returned Information: 0 00:27:23.569 Explicit Persistent Connection Support for Discovery: 0 00:27:23.569 Transport Requirements: 00:27:23.569 Secure Channel: Not Specified 00:27:23.569 Port ID: 1 (0x0001) 00:27:23.569 Controller ID: 65535 (0xffff) 00:27:23.569 Admin Max SQ Size: 32 00:27:23.569 Transport Service Identifier: 4420 00:27:23.569 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:23.569 Transport Address: 10.0.0.1 00:27:23.569 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.569 get_feature(0x01) failed 00:27:23.569 get_feature(0x02) failed 00:27:23.569 get_feature(0x04) failed 00:27:23.569 ===================================================== 00:27:23.569 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.569 ===================================================== 00:27:23.569 Controller Capabilities/Features 00:27:23.569 ================================ 00:27:23.569 Vendor ID: 0000 00:27:23.569 Subsystem Vendor ID: 0000 00:27:23.569 Serial Number: 308af7bdf77774be20a5 00:27:23.569 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.569 Firmware Version: 6.8.9-20 00:27:23.569 Recommended Arb Burst: 6 00:27:23.569 IEEE OUI Identifier: 00 00 00 00:27:23.569 Multi-path I/O 00:27:23.569 May have multiple subsystem ports: Yes 00:27:23.569 May have multiple controllers: Yes 00:27:23.569 Associated with SR-IOV VF: No 00:27:23.569 Max Data Transfer Size: Unlimited 00:27:23.569 Max Number of Namespaces: 1024 00:27:23.569 Max Number of I/O Queues: 128 00:27:23.569 NVMe Specification Version (VS): 1.3 00:27:23.569 NVMe Specification Version (Identify): 1.3 00:27:23.569 Maximum Queue Entries: 1024 00:27:23.569 Contiguous Queues Required: No 00:27:23.569 Arbitration Mechanisms Supported 00:27:23.569 Weighted Round Robin: Not Supported 00:27:23.569 Vendor Specific: Not Supported 00:27:23.569 Reset Timeout: 7500 ms 00:27:23.569 Doorbell Stride: 4 bytes 00:27:23.569 NVM Subsystem Reset: Not Supported 00:27:23.569 Command Sets Supported 00:27:23.569 NVM Command Set: Supported 00:27:23.569 Boot Partition: Not Supported 00:27:23.569 Memory Page Size Minimum: 4096 bytes 00:27:23.569 Memory Page Size Maximum: 4096 bytes 00:27:23.569 Persistent Memory Region: Not Supported 00:27:23.569 Optional Asynchronous Events Supported 00:27:23.569 Namespace Attribute Notices: Supported 00:27:23.569 Firmware Activation Notices: Not Supported 00:27:23.569 ANA Change Notices: Supported 00:27:23.569 PLE Aggregate Log Change Notices: Not Supported 00:27:23.569 LBA Status Info Alert Notices: Not Supported 00:27:23.569 EGE Aggregate Log Change Notices: Not Supported 00:27:23.569 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.569 Zone Descriptor Change Notices: Not Supported 00:27:23.569 Discovery Log Change Notices: Not Supported 00:27:23.569 Controller Attributes 00:27:23.569 128-bit Host Identifier: Supported 00:27:23.569 Non-Operational Permissive Mode: Not Supported 00:27:23.569 NVM Sets: Not Supported 00:27:23.569 Read Recovery Levels: Not Supported 00:27:23.569 Endurance Groups: Not Supported 00:27:23.569 Predictable Latency Mode: Not Supported 00:27:23.569 Traffic Based Keep 
ALive: Supported 00:27:23.569 Namespace Granularity: Not Supported 00:27:23.569 SQ Associations: Not Supported 00:27:23.569 UUID List: Not Supported 00:27:23.569 Multi-Domain Subsystem: Not Supported 00:27:23.569 Fixed Capacity Management: Not Supported 00:27:23.569 Variable Capacity Management: Not Supported 00:27:23.569 Delete Endurance Group: Not Supported 00:27:23.569 Delete NVM Set: Not Supported 00:27:23.569 Extended LBA Formats Supported: Not Supported 00:27:23.569 Flexible Data Placement Supported: Not Supported 00:27:23.569 00:27:23.569 Controller Memory Buffer Support 00:27:23.569 ================================ 00:27:23.569 Supported: No 00:27:23.569 00:27:23.569 Persistent Memory Region Support 00:27:23.569 ================================ 00:27:23.569 Supported: No 00:27:23.569 00:27:23.569 Admin Command Set Attributes 00:27:23.569 ============================ 00:27:23.569 Security Send/Receive: Not Supported 00:27:23.569 Format NVM: Not Supported 00:27:23.569 Firmware Activate/Download: Not Supported 00:27:23.569 Namespace Management: Not Supported 00:27:23.569 Device Self-Test: Not Supported 00:27:23.569 Directives: Not Supported 00:27:23.569 NVMe-MI: Not Supported 00:27:23.569 Virtualization Management: Not Supported 00:27:23.569 Doorbell Buffer Config: Not Supported 00:27:23.569 Get LBA Status Capability: Not Supported 00:27:23.569 Command & Feature Lockdown Capability: Not Supported 00:27:23.569 Abort Command Limit: 4 00:27:23.569 Async Event Request Limit: 4 00:27:23.569 Number of Firmware Slots: N/A 00:27:23.569 Firmware Slot 1 Read-Only: N/A 00:27:23.569 Firmware Activation Without Reset: N/A 00:27:23.569 Multiple Update Detection Support: N/A 00:27:23.569 Firmware Update Granularity: No Information Provided 00:27:23.569 Per-Namespace SMART Log: Yes 00:27:23.569 Asymmetric Namespace Access Log Page: Supported 00:27:23.569 ANA Transition Time : 10 sec 00:27:23.569 00:27:23.569 Asymmetric Namespace Access Capabilities 00:27:23.569 ANA Optimized State : Supported 00:27:23.569 ANA Non-Optimized State : Supported 00:27:23.569 ANA Inaccessible State : Supported 00:27:23.569 ANA Persistent Loss State : Supported 00:27:23.569 ANA Change State : Supported 00:27:23.569 ANAGRPID is not changed : No 00:27:23.569 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:23.569 00:27:23.569 ANA Group Identifier Maximum : 128 00:27:23.569 Number of ANA Group Identifiers : 128 00:27:23.569 Max Number of Allowed Namespaces : 1024 00:27:23.569 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:23.569 Command Effects Log Page: Supported 00:27:23.569 Get Log Page Extended Data: Supported 00:27:23.569 Telemetry Log Pages: Not Supported 00:27:23.569 Persistent Event Log Pages: Not Supported 00:27:23.569 Supported Log Pages Log Page: May Support 00:27:23.569 Commands Supported & Effects Log Page: Not Supported 00:27:23.569 Feature Identifiers & Effects Log Page:May Support 00:27:23.569 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.569 Data Area 4 for Telemetry Log: Not Supported 00:27:23.569 Error Log Page Entries Supported: 128 00:27:23.569 Keep Alive: Supported 00:27:23.569 Keep Alive Granularity: 1000 ms 00:27:23.569 00:27:23.569 NVM Command Set Attributes 00:27:23.569 ========================== 00:27:23.569 Submission Queue Entry Size 00:27:23.569 Max: 64 00:27:23.569 Min: 64 00:27:23.569 Completion Queue Entry Size 00:27:23.569 Max: 16 00:27:23.569 Min: 16 00:27:23.569 Number of Namespaces: 1024 00:27:23.569 Compare Command: Not Supported 00:27:23.569 Write Uncorrectable 
Command: Not Supported 00:27:23.569 Dataset Management Command: Supported 00:27:23.569 Write Zeroes Command: Supported 00:27:23.569 Set Features Save Field: Not Supported 00:27:23.569 Reservations: Not Supported 00:27:23.569 Timestamp: Not Supported 00:27:23.569 Copy: Not Supported 00:27:23.569 Volatile Write Cache: Present 00:27:23.569 Atomic Write Unit (Normal): 1 00:27:23.569 Atomic Write Unit (PFail): 1 00:27:23.569 Atomic Compare & Write Unit: 1 00:27:23.569 Fused Compare & Write: Not Supported 00:27:23.569 Scatter-Gather List 00:27:23.569 SGL Command Set: Supported 00:27:23.569 SGL Keyed: Not Supported 00:27:23.569 SGL Bit Bucket Descriptor: Not Supported 00:27:23.569 SGL Metadata Pointer: Not Supported 00:27:23.569 Oversized SGL: Not Supported 00:27:23.569 SGL Metadata Address: Not Supported 00:27:23.569 SGL Offset: Supported 00:27:23.569 Transport SGL Data Block: Not Supported 00:27:23.569 Replay Protected Memory Block: Not Supported 00:27:23.569 00:27:23.569 Firmware Slot Information 00:27:23.569 ========================= 00:27:23.569 Active slot: 0 00:27:23.569 00:27:23.569 Asymmetric Namespace Access 00:27:23.569 =========================== 00:27:23.569 Change Count : 0 00:27:23.569 Number of ANA Group Descriptors : 1 00:27:23.569 ANA Group Descriptor : 0 00:27:23.569 ANA Group ID : 1 00:27:23.569 Number of NSID Values : 1 00:27:23.569 Change Count : 0 00:27:23.569 ANA State : 1 00:27:23.570 Namespace Identifier : 1 00:27:23.570 00:27:23.570 Commands Supported and Effects 00:27:23.570 ============================== 00:27:23.570 Admin Commands 00:27:23.570 -------------- 00:27:23.570 Get Log Page (02h): Supported 00:27:23.570 Identify (06h): Supported 00:27:23.570 Abort (08h): Supported 00:27:23.570 Set Features (09h): Supported 00:27:23.570 Get Features (0Ah): Supported 00:27:23.570 Asynchronous Event Request (0Ch): Supported 00:27:23.570 Keep Alive (18h): Supported 00:27:23.570 I/O Commands 00:27:23.570 ------------ 00:27:23.570 Flush (00h): Supported 00:27:23.570 Write (01h): Supported LBA-Change 00:27:23.570 Read (02h): Supported 00:27:23.570 Write Zeroes (08h): Supported LBA-Change 00:27:23.570 Dataset Management (09h): Supported 00:27:23.570 00:27:23.570 Error Log 00:27:23.570 ========= 00:27:23.570 Entry: 0 00:27:23.570 Error Count: 0x3 00:27:23.570 Submission Queue Id: 0x0 00:27:23.570 Command Id: 0x5 00:27:23.570 Phase Bit: 0 00:27:23.570 Status Code: 0x2 00:27:23.570 Status Code Type: 0x0 00:27:23.570 Do Not Retry: 1 00:27:23.570 Error Location: 0x28 00:27:23.570 LBA: 0x0 00:27:23.570 Namespace: 0x0 00:27:23.570 Vendor Log Page: 0x0 00:27:23.570 ----------- 00:27:23.570 Entry: 1 00:27:23.570 Error Count: 0x2 00:27:23.570 Submission Queue Id: 0x0 00:27:23.570 Command Id: 0x5 00:27:23.570 Phase Bit: 0 00:27:23.570 Status Code: 0x2 00:27:23.570 Status Code Type: 0x0 00:27:23.570 Do Not Retry: 1 00:27:23.570 Error Location: 0x28 00:27:23.570 LBA: 0x0 00:27:23.570 Namespace: 0x0 00:27:23.570 Vendor Log Page: 0x0 00:27:23.570 ----------- 00:27:23.570 Entry: 2 00:27:23.570 Error Count: 0x1 00:27:23.570 Submission Queue Id: 0x0 00:27:23.570 Command Id: 0x4 00:27:23.570 Phase Bit: 0 00:27:23.570 Status Code: 0x2 00:27:23.570 Status Code Type: 0x0 00:27:23.570 Do Not Retry: 1 00:27:23.570 Error Location: 0x28 00:27:23.570 LBA: 0x0 00:27:23.570 Namespace: 0x0 00:27:23.570 Vendor Log Page: 0x0 00:27:23.570 00:27:23.570 Number of Queues 00:27:23.570 ================ 00:27:23.570 Number of I/O Submission Queues: 128 00:27:23.570 Number of I/O Completion Queues: 128 00:27:23.570 
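The three error-log entries above line up with the get_feature(0x01), get_feature(0x02) and get_feature(0x04) failures reported at the start of this identify run: status code 0x2 with status code type 0x0 is the generic Invalid Field in Command status, and error location 0x28 points at byte 40 of the submission queue entry, which is CDW10, where Get Features carries the feature identifier. A hypothetical cross-check from the initiator side with nvme-cli, not part of this test run (the /dev/nvme0 name is an assumption; the fabrics controller enumerates at the next free index):

    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme error-log /dev/nvme0 -e 3    # dump the three most recent error entries
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn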
00:27:23.570 ZNS Specific Controller Data 00:27:23.570 ============================ 00:27:23.570 Zone Append Size Limit: 0 00:27:23.570 00:27:23.570 00:27:23.570 Active Namespaces 00:27:23.570 ================= 00:27:23.570 get_feature(0x05) failed 00:27:23.570 Namespace ID:1 00:27:23.570 Command Set Identifier: NVM (00h) 00:27:23.570 Deallocate: Supported 00:27:23.570 Deallocated/Unwritten Error: Not Supported 00:27:23.570 Deallocated Read Value: Unknown 00:27:23.570 Deallocate in Write Zeroes: Not Supported 00:27:23.570 Deallocated Guard Field: 0xFFFF 00:27:23.570 Flush: Supported 00:27:23.570 Reservation: Not Supported 00:27:23.570 Namespace Sharing Capabilities: Multiple Controllers 00:27:23.570 Size (in LBAs): 3750748848 (1788GiB) 00:27:23.570 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:23.570 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:23.570 UUID: 0239a7d2-e635-457a-9d00-2f4c3fe3430c 00:27:23.570 Thin Provisioning: Not Supported 00:27:23.570 Per-NS Atomic Units: Yes 00:27:23.570 Atomic Write Unit (Normal): 8 00:27:23.570 Atomic Write Unit (PFail): 8 00:27:23.570 Preferred Write Granularity: 8 00:27:23.570 Atomic Compare & Write Unit: 8 00:27:23.570 Atomic Boundary Size (Normal): 0 00:27:23.570 Atomic Boundary Size (PFail): 0 00:27:23.570 Atomic Boundary Offset: 0 00:27:23.570 NGUID/EUI64 Never Reused: No 00:27:23.570 ANA group ID: 1 00:27:23.570 Namespace Write Protected: No 00:27:23.570 Number of LBA Formats: 1 00:27:23.570 Current LBA Format: LBA Format #00 00:27:23.570 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:23.570 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:23.570 rmmod nvme_tcp 00:27:23.570 rmmod nvme_fabrics 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:27:23.570 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:27:23.832 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:23.832 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:27:23.832 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:27:23.832 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:23.833 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:23.833 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:27:23.833 19:16:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:25.752 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-save 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-restore 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@488 -- # echo 0 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.753 19:16:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.753 19:16:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:25.753 19:16:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.753 19:16:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:27:25.753 19:16:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:27:25.753 19:16:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.060 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.060 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:29.321 00:27:29.321 real 0m18.945s 00:27:29.321 user 0m5.019s 00:27:29.321 sys 0m10.935s 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.321 ************************************ 00:27:29.321 END TEST nvmf_identify_kernel_target 00:27:29.321 ************************************ 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:29.321 19:16:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.581 ************************************ 00:27:29.581 START TEST nvmf_auth_host 00:27:29.581 ************************************ 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.581 * Looking for test storage... 
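For reference before the nvmf_auth_host output continues: the configure_kernel_target setup traced earlier and the clean_kernel_target teardown traced just above reduce to the standard nvmet configfs sequence. A condensed sketch of both halves, using the same paths this run used (attribute names follow the kernel's nvmet configfs layout; the model-string write shown in the trace is elided):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    # setup: subsystem, one namespace backed by the local block device, one TCP port
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # teardown, mirroring clean_kernel_target: disable, unlink, remove, unload
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet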
00:27:29.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.581 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:29.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.582 --rc genhtml_branch_coverage=1 00:27:29.582 --rc genhtml_function_coverage=1 00:27:29.582 --rc genhtml_legend=1 00:27:29.582 --rc geninfo_all_blocks=1 00:27:29.582 --rc geninfo_unexecuted_blocks=1 00:27:29.582 00:27:29.582 ' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:29.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.582 --rc genhtml_branch_coverage=1 00:27:29.582 --rc genhtml_function_coverage=1 00:27:29.582 --rc genhtml_legend=1 00:27:29.582 --rc geninfo_all_blocks=1 00:27:29.582 --rc geninfo_unexecuted_blocks=1 00:27:29.582 00:27:29.582 ' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:29.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.582 --rc genhtml_branch_coverage=1 00:27:29.582 --rc genhtml_function_coverage=1 00:27:29.582 --rc genhtml_legend=1 00:27:29.582 --rc geninfo_all_blocks=1 00:27:29.582 --rc geninfo_unexecuted_blocks=1 00:27:29.582 00:27:29.582 ' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:29.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.582 --rc genhtml_branch_coverage=1 00:27:29.582 --rc genhtml_function_coverage=1 00:27:29.582 --rc genhtml_legend=1 00:27:29.582 --rc geninfo_all_blocks=1 00:27:29.582 --rc geninfo_unexecuted_blocks=1 00:27:29.582 00:27:29.582 ' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.582 19:16:58 
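The `lt 1.15 2` sequence above is the generic version comparator from scripts/common.sh, here deciding whether the installed lcov predates 2.x. A condensed sketch of the same logic (helper names kept, internals simplified):

```bash
# Condensed cmp_versions: split both versions on . - :, compare numerically
# component by component, and let the requested operator decide the result.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *'='* ]]   # all components equal: only <=, >=, == succeed
}

lt 1.15 2 && echo "lcov predates 2.x"   # 1 sorts before 2 at the first component
```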
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:29.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:29.582 19:16:58 
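The `[: : integer expression expected` line above is a real, if harmless, shell error: common.sh line 31 evaluates `[ '' -eq 1 ]` because the variable under test is empty in this configuration, and `-eq` only accepts integers. A small reproduction with the usual guard (the variable name here is illustrative, not the one common.sh uses):

```bash
# Reproduce the "[: : integer expression expected" message and show the guard.
# "flag" is an illustrative name, not the variable common.sh actually tests.
flag=""                                  # empty in this run
[ "$flag" -eq 1 ] && echo enabled        # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the test well-formed
```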
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:29.582 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:29.843 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:29.843 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:27:29.843 19:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # 
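A subtle idiom appears a little earlier in this block: `eval '_remove_target_ns 15> /dev/null'`. Redirecting fd 15 only makes sense if the harness routes xtrace output through that descriptor, which lets it mute the trace of one noisy command without toggling `set -x`. A sketch of that mechanism, under the assumption (not confirmed by the log itself) that BASH_XTRACEFD is set to 15:

```bash
# Assumption: the harness sets BASH_XTRACEFD=15. Redirecting fd 15 to
# /dev/null for a single command then discards just that command's trace.
exec 15>&2                   # trace normally mirrors stderr
BASH_XTRACEFD=15
set -x
noisy() { echo "work happens here"; }
eval 'noisy 15> /dev/null'   # everything noisy() runs is traced into /dev/null
echo "traced normally again"
```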
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:37.991 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:37.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:37.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ 
tcp == rdma ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:37.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:37.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # create_target_ns 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:37.992 19:17:05 
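The device scan that just completed is mostly sysfs globbing: each supported PCI function lists its kernel interfaces under its own `net/` directory. A rough sketch, with this host's two E810 ports hard-coded:

```bash
# Rough sketch of the net-device discovery above; the BDFs are this host's
# two E810 ports, and each function's interfaces live under its sysfs node.
shopt -s nullglob
net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```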
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:37.992 19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:37.992 
19:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:37.992 10.0.0.1 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:37.992 10.0.0.2 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
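Addresses in this setup are carried around as a single 32-bit integer per pool slot: 167772161 is 0x0A000001, i.e. 10.0.0.1, and incrementing the integer yields the peer address. The conversion behind the `printf '%u.%u.%u.%u\n' 10 0 0 1` seen above, sketched:

```bash
# Sketch of val_to_ip: unpack a 32-bit integer into a dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff)) $((val & 0xff))
}
val_to_ip 167772161   # 10.0.0.1 (initiator side)
val_to_ip 167772162   # 10.0.0.2 (target side)
```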
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:37.992 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:37.993 19:17:06 
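Stripped of helper indirection, the bring-up that finished just above is roughly ten commands: create the namespace, move the second port into it, address both ends, record each address in ifalias (later lookups read it back), bring the links up, and open TCP/4420 toward the target. Condensed:

```bash
# Condensed replay of the interface setup traced above: cvl_0_0 stays in the
# default namespace as the initiator, cvl_0_1 becomes the target inside the ns.
set -e
ns=nvmf_ns_spdk
ip netns add "$ns"
ip netns exec "$ns" ip link set lo up
ip link set cvl_0_1 netns "$ns"
ip addr add 10.0.0.1/24 dev cvl_0_0
echo 10.0.0.1 > /sys/class/net/cvl_0_0/ifalias       # address lookups read this file
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec "$ns" tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up
ip netns exec "$ns" ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
```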
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:37.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:37.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.611 ms 00:27:37.993 00:27:37.993 --- 10.0.0.1 ping statistics --- 00:27:37.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.993 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= 
count=1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:37.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:27:37.993 00:27:37.993 --- 10.0.0.2 ping statistics --- 00:27:37.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.993 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 
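The connectivity probe above reads each recorded address back from ifalias and pings it once from the opposite side of the namespace boundary; in short:

```bash
# The two ping probes above, condensed. Addresses come from the ifalias files
# written during setup; each side pings the other across the namespace.
init_ip=$(cat /sys/class/net/cvl_0_0/ifalias)                            # 10.0.0.1
ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"                          # target ns -> initiator
tgt_ip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias)  # 10.0.0.2
ping -c 1 "$tgt_ip"                                                      # initiator -> target ns
```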
-- # get_ip_address initiator1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:37.993 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:37.994 19:17:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=491911 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 491911 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 491911 ']' 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
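nvmfappstart, in flight here, boils down to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to answer. A sketch; the poll loop stands in for autotest_common.sh's waitforlisten, whose real checks are more involved:

```bash
# Launch the SPDK target in the namespace (flags as in this run) and poll the
# RPC socket. The loop approximates waitforlisten, not its exact checks.
ip netns exec nvmf_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # stop waiting if the target died
    sleep 0.5
done
```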
00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.994 19:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=fbe83f923494286df97cc8e35e129d43 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.6f9 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key fbe83f923494286df97cc8e35e129d43 0 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 fbe83f923494286df97cc8e35e129d43 0 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=fbe83f923494286df97cc8e35e129d43 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:37.994 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.6f9 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.6f9 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.6f9 00:27:38.256 19:17:07 
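All the keys minted from here on follow the NVMe DH-HMAC-CHAP secret format: `len` hex characters drawn from /dev/urandom, then `DHHC-1:<digest>:<base64(secret + CRC32)>:` written to a mode-0600 temp file. A condensed sketch of gen_dhchap_key with the inline python step spelled out (simplified against the real helper; the framing follows the spec'd format):

```bash
# Condensed gen_dhchap_key: random hex secret, wrapped as a DH-HMAC-CHAP
# key string DHHC-1:<digest>:<base64(secret + crc32)>: in a 0600 temp file.
gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <len>
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key null 32   # prints something like /tmp/spdk.key-null.6f9
```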
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=2791f84c04ac816c3d06f764aab440139661908acd907cf247392558cef6db16 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Jqf 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 2791f84c04ac816c3d06f764aab440139661908acd907cf247392558cef6db16 3 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 2791f84c04ac816c3d06f764aab440139661908acd907cf247392558cef6db16 3 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=2791f84c04ac816c3d06f764aab440139661908acd907cf247392558cef6db16 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Jqf 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Jqf 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Jqf 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=f222e819377cc6ca7e732ad30a40e77073fb7b9cd56ba5d7 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.nEb 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # 
format_dhchap_key f222e819377cc6ca7e732ad30a40e77073fb7b9cd56ba5d7 0 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 f222e819377cc6ca7e732ad30a40e77073fb7b9cd56ba5d7 0 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=f222e819377cc6ca7e732ad30a40e77073fb7b9cd56ba5d7 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.nEb 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.nEb 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nEb 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d813ae805cfabda9d1a8d6a66e3e44a6cd88e0a3f0156bc0 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.n0L 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d813ae805cfabda9d1a8d6a66e3e44a6cd88e0a3f0156bc0 2 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d813ae805cfabda9d1a8d6a66e3e44a6cd88e0a3f0156bc0 2 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d813ae805cfabda9d1a8d6a66e3e44a6cd88e0a3f0156bc0 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.n0L 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.n0L 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.n0L 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@525 -- # local digest len file key 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e5d2b9f75274f3181567ac658f9a3258 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.6cb 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e5d2b9f75274f3181567ac658f9a3258 1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e5d2b9f75274f3181567ac658f9a3258 1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e5d2b9f75274f3181567ac658f9a3258 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:27:38.256 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.6cb 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.6cb 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6cb 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=fcb10e11750e0b89f6fccc30715f8073 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.UT9 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key fcb10e11750e0b89f6fccc30715f8073 1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 fcb10e11750e0b89f6fccc30715f8073 1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key 
digest 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=fcb10e11750e0b89f6fccc30715f8073 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.UT9 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.UT9 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UT9 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=3372ced53fbc2aa12c699396ca76204f87888ef42df053de 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.wnU 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 3372ced53fbc2aa12c699396ca76204f87888ef42df053de 2 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 3372ced53fbc2aa12c699396ca76204f87888ef42df053de 2 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=3372ced53fbc2aa12c699396ca76204f87888ef42df053de 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.wnU 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.wnU 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wnU 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.519 19:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=59bdb03a8679bd13633abcd329246670 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Cn5 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 59bdb03a8679bd13633abcd329246670 0 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 59bdb03a8679bd13633abcd329246670 0 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=59bdb03a8679bd13633abcd329246670 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Cn5 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Cn5 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Cn5 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d6c2537fd83ee38c5e15d6ddd350e291ea024c84716e4597403e6e7d0a9b018a 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.gaL 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d6c2537fd83ee38c5e15d6ddd350e291ea024c84716e4597403e6e7d0a9b018a 3 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d6c2537fd83ee38c5e15d6ddd350e291ea024c84716e4597403e6e7d0a9b018a 3 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # 
key=d6c2537fd83ee38c5e15d6ddd350e291ea024c84716e4597403e6e7d0a9b018a 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:27:38.519 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.gaL 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.gaL 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gaL 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 491911 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 491911 ']' 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:38.780 19:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6f9 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Jqf ]] 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jqf 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nEb 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.780 
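The span above generates every DH-HMAC-CHAP secret the same way: gen_dhchap_key pulls raw key bytes with xxd, wraps them into a DHHC-1 string with an inline python snippet, and locks the file down to mode 0600. A minimal standalone sketch of one round follows, assuming the DHHC-1 layout is base64(key bytes + little-endian CRC32 of the key) and that the two-digit field selects the hash (00=null, 01=sha256, 02=sha384, 03=sha512) -- an assumption consistent with the keys visible in this trace, not a quote of nvmf/common.sh itself:

digest=1                                  # hmac(sha256), like keys[2] above
key=$(xxd -p -c0 -l 16 /dev/urandom)      # 16 random bytes -> 32 hex chars
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# assumption: a 4-byte little-endian CRC32 of the key is appended before base64
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"                        # secrets stay owner-readable only
echo "$file"

For the 48- and 64-character keys in the trace only the xxd length and the digest id change (24 or 32 random bytes, 02 or 03).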
19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.n0L ]] 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.n0L 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.780 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6cb 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UT9 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT9 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wnU 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Cn5 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Cn5 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gaL 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
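At this point every generated key file has been handed to the SPDK application through the keyring_file_add_key RPC; the ckey4 slot is skipped because ckeys[4] is empty. The same registration replayed by hand against a running target on /var/tmp/spdk.sock (rpc.py path taken from this job's checkout; file names are the ones from this trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" keyring_file_add_key key2  /tmp/spdk.key-sha256.6cb   # host key for keyid 2
"$rpc" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UT9   # matching controller key
"$rpc" keyring_file_add_key key3  /tmp/spdk.key-sha384.wnU
"$rpc" keyring_file_add_key ckey3 /tmp/spdk.key-null.Cn5
"$rpc" keyring_file_add_key key4  /tmp/spdk.key-sha512.gaL   # no ckey4: ckeys[4] is empty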
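The nvmet_auth_init call that starts next builds the kernel-side target entirely through configfs and then pins the DH-HMAC-CHAP parameters on the allowed host. A condensed sketch of those mkdir/echo/ln -s steps; the attribute names (device_path, enable, addr_*, attr_allow_any_host, dhchap_*) are an assumption based on the standard Linux nvmet configfs layout, since the trace shows the echoed values but not their target paths:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"         # tcp listener on 4420
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # expose subsystem on the port
echo 0 > "$subsys/attr_allow_any_host"                   # require explicit allow-listing
ln -s "$host" "$subsys/allowed_hosts/"
# nvmet_auth_set_key then drops the negotiated parameters on the host entry
# (key value below is keys[1] from this trace; a controller key, when present,
# would go to $host/dhchap_ctrl_key the same way):
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==:" > "$host/dhchap_key"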
00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:27:38.781 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:27:39.041 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.041 19:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.343 Waiting for block devices as requested 00:27:42.343 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.343 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:42.343 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:42.343 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:42.603 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:42.603 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:42.603 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:42.863 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:42.863 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.123 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.123 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.123 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.384 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.384 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.384 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.384 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.644 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:44.584 No valid GPT data, bailing 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:44.584 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:44.584 00:27:44.584 Discovery Log Number of Records 2, Generation counter 2 00:27:44.584 =====Discovery Log Entry 0====== 00:27:44.584 trtype: tcp 00:27:44.584 adrfam: ipv4 00:27:44.584 subtype: current discovery subsystem 00:27:44.584 treq: not specified, sq flow control disable supported 00:27:44.585 portid: 1 00:27:44.585 trsvcid: 4420 00:27:44.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:44.585 traddr: 10.0.0.1 00:27:44.585 eflags: none 00:27:44.585 sectype: none 00:27:44.585 =====Discovery Log Entry 1====== 00:27:44.585 trtype: tcp 00:27:44.585 adrfam: ipv4 00:27:44.585 subtype: nvme subsystem 00:27:44.585 treq: not specified, sq flow control disable supported 00:27:44.585 portid: 1 00:27:44.585 trsvcid: 4420 00:27:44.585 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:44.585 traddr: 10.0.0.1 00:27:44.585 eflags: none 00:27:44.585 sectype: none 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.585 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.845 nvme0n1 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.845 19:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.845 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:44.846 19:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.846 nvme0n1 00:27:44.846 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.106 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.107 nvme0n1 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.107 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 nvme0n1 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.368 19:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.368 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.629 nvme0n1 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# keyid=4 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.629 19:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.889 nvme0n1 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.889 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:45.890 
19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.890 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.150 nvme0n1 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:46.150 
19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.150 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.151 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.411 nvme0n1 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.412 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.673 nvme0n1 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.673 19:17:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.673 19:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.934 nvme0n1 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.934 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.195 nvme0n1 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.195 19:17:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:47.195 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.196 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.456 nvme0n1 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.456 19:17:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.456 19:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.717 nvme0n1 00:27:47.717 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.717 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.717 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.717 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.034 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.035 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.337 nvme0n1 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:48.337 19:17:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.337 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.616 nvme0n1 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:48.616 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.617 19:17:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.617 19:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.878 nvme0n1 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.878 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 
nvme0n1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.450 19:17:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.450 19:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.021 nvme0n1 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:50.021 19:17:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.021 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.592 nvme0n1 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
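[The trace above and below is inside nvmet_auth_set_key (host/auth.sh@42-51): it records digest, dhgroup and keyid, then echoes 'hmac(sha256)', the dhgroup, the DHHC-1 key and, when one exists, the controller key. Because set -x does not print redirections, the targets of those echo lines are invisible in this log. A minimal sketch of the function under that caveat; the configfs host path and dhchap_* attribute names are assumptions, not taken from the trace, while the keys/ckeys arrays do appear in it:

    # Sketch only: reconstructs the echo sequence traced at host/auth.sh@48-51.
    # The /sys/kernel/config/nvmet/... path and attribute names are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"       # 'hmac(sha256)' in the trace
        echo "$dhgroup"      > "$host/dhchap_dhgroup"    # ffdhe3072 ... ffdhe8192
        echo "$key"          > "$host/dhchap_key"        # DHHC-1:<t>:<base64>: secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only when a ckey exists
    }
]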
00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.593 19:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.164 nvme0n1 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.164 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.424 nvme0n1 00:27:51.424 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.424 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.424 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.424 19:17:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.424 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.424 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
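[Each connect_authenticate iteration (host/auth.sh@55-65 in the trace) repeats the same RPC sequence: restrict the initiator to one digest/dhgroup pair, attach with that keyid's secret, confirm the controller actually came up, then detach. Reassembled from the commands that appear verbatim in this log; a sketch of the flow, not necessarily SPDK's exact helper:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # --dhchap-ctrlr-key is passed only when a ckey exists for this keyid (host/auth.sh@58)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # authentication succeeded iff the controller is visible under its name (host/auth.sh@64)
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
]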
00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.686 19:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.258 nvme0n1 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.258 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 
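[The loop headers traced at host/auth.sh@101-104 ("for dhgroup in ...", "for keyid in ...") are the driver behind all of the repetition in this stretch: every FFDHE group is exercised against every keyid, here with the sha256 digest. The DHHC-1:<t>:<base64>: strings are NVMe DH-HMAC-CHAP secret representations; the <t> field should be the secret-transformation id (00 = unhashed, 01/02/03 = SHA-256/384/512), which is consistent with the differing key lengths per keyid. A sketch of the loop, with the group list limited to what this portion of the log actually iterates:

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do   # groups seen in this stretch
        for keyid in "${!keys[@]}"; do                           # keyids 0-4
            nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"      # target side (host/auth.sh@103)
            connect_authenticate sha256 "$dhgroup" "$keyid"      # initiator side (host/auth.sh@104)
        done
    done
]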
00:27:52.519 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.520 19:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.091 nvme0n1 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.091 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:53.352 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.353 19:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.924 nvme0n1 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.924 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.185 
19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.185 19:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.756 nvme0n1 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.756 19:17:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:54.756 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.017 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.589 nvme0n1 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 
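At this point the test matrix advances: the trace just finished sha256 against ffdhe8192 and re-enters at host/auth.sh@100 with digest sha384, dhgroup ffdhe2048, key id 0. The @100/@101/@102 for-lines give the driver loop away; a reconstruction of its shape follows, with the caveat that the exact array contents are an assumption (this excerpt only proves that sha256 and sha384, the ffdhe2048/3072/4096/8192 groups, and key ids 0-4 are among them):

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: install keys in kernel nvmet
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: SPDK attach/verify/detach
            done
        done
    done

Each digest therefore re-runs all five key ids against every DH group, so a single digest/dhgroup mismatch or key regression fails at one specific @103/@104 pair in the trace rather than somewhere ambiguous.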
00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.589 19:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.851 nvme0n1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.851 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.112 nvme0n1 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.112 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.372 nvme0n1 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.372 19:17:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.372 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.373 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.633 nvme0n1 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.633 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# keyid=4 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.634 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.894 nvme0n1 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.894 19:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.894 
19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.894 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.156 nvme0n1 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:57.156 
19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:57.156 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.157 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.419 nvme0n1 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.419 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.679 nvme0n1 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.680 19:17:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.680 19:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.941 nvme0n1 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.941 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 nvme0n1 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.202 19:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.202 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.463 nvme0n1 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.463 19:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.463 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.725 nvme0n1 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.725 19:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.725 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.725 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.725 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.726 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 nvme0n1 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:27:59.297 19:17:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.297 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.558 nvme0n1 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.558 19:17:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.558 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.559 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.559 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.559 19:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.819 nvme0n1 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.819 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.820 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.399 
nvme0n1 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.399 19:17:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.399 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.400 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.400 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.400 19:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 nvme0n1 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:00.978 19:17:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.978 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.549 nvme0n1 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
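Each iteration in the trace above pairs a target-side nvmet_auth_set_key with a host-side connect_authenticate. The three echo lines at auth.sh@48-50 push the HMAC name, the DH group, and the DHHC-1 secret to the target; the destination paths are not visible in this excerpt, but the values match the Linux kernel nvmet configfs attributes. Below is a minimal sketch of one sha384/ffdhe6144/keyid=3 round, assuming that configfs layout, that rpc_cmd wraps SPDK's scripts/rpc.py against a running target, and that key3/ckey3 name keys registered earlier in the run (the full DHHC-1 values appear in the trace and are elided here):

    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0
    CFS=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed nvmet configfs path

    # Target side: set the digest, DH group, and per-host secrets the
    # controller will require (mirrors the echoes at auth.sh@48-50).
    echo 'hmac(sha384)' > "$CFS/dhchap_hash"
    echo 'ffdhe6144'    > "$CFS/dhchap_dhgroup"
    echo 'DHHC-1:02:MzM3...fQ==:' > "$CFS/dhchap_key"      # host secret, elided
    echo 'DHHC-1:00:NTl...q2I:'   > "$CFS/dhchap_ctrl_key" # controller secret, elided

    # Host side: restrict SPDK to the same digest/group, then attach with
    # both the host key and the controller (bidirectional) key.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3 --dhchap-ctrlr-key ckey3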
00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.549 19:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.120 nvme0n1 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.120 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.691 nvme0n1 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.691 19:17:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
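Between iterations the trace verifies that the authenticated controller actually exists before tearing it down (auth.sh@64-65). The comparison [[ nvme0 == \n\v\m\e\0 ]] is bash's way of forcing a literal string match: escaping every character on the right-hand side disables glob interpretation. A condensed sketch of that check, under the same rpc_cmd assumption as above:

    # Confirm the attach produced exactly the controller we asked for, then
    # detach it so the next digest/dhgroup/keyid combination starts clean.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                # quoted RHS is also a literal match
    rpc_cmd bdev_nvme_detach_controller nvme0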
00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.691 19:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.262 nvme0n1 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.262 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 
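The optional controller key is spliced in at auth.sh@58 with ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion yields an empty array when ckeys[keyid] is unset or empty, which is why the keyid=4 rounds in this trace attach with --dhchap-key key4 only (unidirectional authentication). A standalone illustration of the idiom, with hypothetical placeholder values:

    # ${var:+word} expands to word only when var is set and non-empty, so an
    # empty ckeys slot produces a zero-length array and no extra flags at all.
    ckeys=("DHHC-1:03:..." "DHHC-1:02:..." "DHHC-1:01:..." "DHHC-1:00:..." "")
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=4 prints "0 extra arg(s):", matching the key4-only attach calls.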
00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.523 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.524 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.524 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.524 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.524 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.524 19:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.094 nvme0n1 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.094 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.355 19:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.925 nvme0n1 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.925 
19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.925 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.926 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.867 nvme0n1 00:28:05.867 19:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.867 19:17:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.867 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.809 nvme0n1 00:28:06.809 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.809 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.809 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 
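At this point the sha384 pass is complete and the outer sweep advances to sha512 (host/auth.sh@100-104): for every digest, every DH group is tried against every key slot, with the target re-keyed before each host-side connect. The shape of that sweep, with array contents assumed from the combinations that appear in this log:

  # Assumed contents; the trace in this section exercises sha384/sha512 with
  # the ffdhe2048-ffdhe8192 groups and key slots 0-4.
  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
          done
      done
  done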
00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 nvme0n1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.810 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 nvme0n1 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.071 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 nvme0n1 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.332 19:17:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.332 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.594 nvme0n1 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# keyid=4 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.594 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 nvme0n1 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.856 19:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.856 
19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.856 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.118 nvme0n1 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:08.118 
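The nvmet_auth_set_key calls traced here (host/auth.sh@42-51) re-key the kernel nvmet target before each connect: they emit hmac(<digest>), the DH group, and the DHHC-1 secrets seen in the echoes above. In the DHHC-1:NN:...: secret format, the NN field names the secret transform (00 raw, 01/02/03 for SHA-256/384/512-sized secrets). A plausible body for the helper — the configfs destinations are an assumption, based on the per-host DH-HMAC-CHAP attributes that kernels with in-band NVMe authentication expose:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      # Assumed path; the hostnqn matches the -q argument used on the host side.
      local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$hostdir/dhchap_hash"     # e.g. hmac(sha512)
      echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"  # e.g. ffdhe3072
      echo "$key"          > "$hostdir/dhchap_key"
      # Slot 4 has no controller key, so the bidirectional write is conditional,
      # matching the [[ -z '' ]] checks in the trace.
      [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
  }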
19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.118 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.380 nvme0n1 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.380 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 nvme0n1 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.641 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.642 19:17:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.642 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.903 nvme0n1 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.903 19:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.903 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.164 nvme0n1 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.164 19:17:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.164 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.425 nvme0n1 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.425 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.426 19:17:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.426 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 nvme0n1 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.687 19:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.687 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.687 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.687 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:09.687 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.947 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.209 nvme0n1 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:10.209 19:17:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.209 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.470 nvme0n1 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.470 19:17:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.470 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.471 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.731 nvme0n1 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.731 19:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.731 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.992 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.992 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.992 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.992 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.254 
nvme0n1 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.254 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.515 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.516 19:17:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.516 19:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.776 nvme0n1 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.776 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:12.037 19:17:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.037 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.298 nvme0n1 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.298 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
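[Annotation] The trace at this point is inside nvmet_auth_set_key (host/auth.sh@42-51), the helper that provisions the kernel nvmet target before each connection attempt: it looks up the secret and controller secret for the requested keyid, then echoes the HMAC name, the DH group, and the DHHC-1 secrets for the allowed host. A minimal reconstruction from the visible trace lines; the configfs destination paths are an assumption, since the trace shows only the echo commands, not their redirections:

    # Sketch reconstructed from host/auth.sh@42-51 as seen in the trace.
    # The /sys/kernel/config/nvmet paths are assumed, not shown in the log.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey            # auth.sh@42
        digest="$1" dhgroup="$2" keyid="$3"            # auth.sh@44
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"    # auth.sh@45-46

        local host="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"
        echo "hmac($digest)" > "$host/dhchap_hash"     # auth.sh@48
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # auth.sh@49
        echo "$key" > "$host/dhchap_key"               # auth.sh@50
        # keyid 4 carries no controller key, hence the [[ -z '' ]] branches
        # visible elsewhere in this log
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # auth.sh@51
    }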
00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.559 19:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.819 nvme0n1 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:12.819 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.081 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.342 nvme0n1 00:28:13.342 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.342 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.342 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.342 19:17:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.342 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.342 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO: 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: ]] 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Mjc5MWY4NGMwNGFjODE2YzNkMDZmNzY0YWFiNDQwMTM5NjYxOTA4YWNkOTA3Y2YyNDczOTI1NThjZWY2ZGIxNmCmCks=: 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
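[Annotation] Each nvmet_auth_set_key call above is paired with connect_authenticate (host/auth.sh@55-65), which exercises the SPDK initiator side: restrict bdev_nvme to the single digest/dhgroup under test, attach with the matching key pair, confirm the controller came up, then detach. Reconstructed from the trace (rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py):

    # connect_authenticate as it appears in the trace (host/auth.sh@55-65).
    connect_authenticate() {
        local digest dhgroup keyid ckey                          # auth.sh@55
        digest="$1" dhgroup="$2" keyid="$3"                      # auth.sh@57
        # auth.sh@58: pass --dhchap-ctrlr-key only when a ckey exists
        # (keyid 4 has none)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # auth.sh@60: allow exactly one digest and one DH group, so a
        # successful handshake proves this specific combination works
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # auth.sh@61: the attach RPC prints the created bdev name, which is
        # where the bare "nvme0n1" lines in this log come from
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # auth.sh@64-65: verify the controller exists, then tear it down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The outer loops at auth.sh@101-103 drive this over every DH group and every keyid, which is why the same set-key/attach/verify/detach block repeats throughout this part of the log.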
00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.603 19:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.173 nvme0n1 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.174 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 
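[Annotation] The secrets echoed above all use the DHHC-1 representation from NVMe in-band authentication (the same format nvme-cli's gen-dhchap-key emits). As best as can be told from the format, and offered here as a reading aid rather than anything the test does: the middle field selects an optional HMAC transform applied to the secret before use (00 = use as-is, 01/02/03 = SHA-256/384/512), and the base64 payload carries the raw secret (32, 48, or 64 bytes in this log) followed by a 4-byte CRC-32. A hypothetical one-liner to peel a key apart, using key0 from this log; not part of auth.sh:

    # Hypothetical decode helper: strip the DHHC-1 framing, base64-decode,
    # and drop the trailing 4-byte CRC-32 to get the raw secret
    # (GNU head is assumed, for the negative byte count).
    key='DHHC-1:00:ZmJlODNmOTIzNDk0Mjg2ZGY5N2NjOGUzNWUxMjlkNDPGqwzO:'
    payload=${key#DHHC-1:*:}   # drop the "DHHC-1:NN:" prefix
    payload=${payload%:}       # drop the trailing colon
    echo -n "$payload" | base64 -d | head -c -4 | xxd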
00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.434 19:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.005 nvme0n1 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.005 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.006 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.006 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:15.006 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.267 19:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.838 nvme0n1 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.838 
19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM3MmNlZDUzZmJjMmFhMTJjNjk5Mzk2Y2E3NjIwNGY4Nzg4OGVmNDJkZjA1M2Rl2qf8fQ==: 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: ]] 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTliZGIwM2E4Njc5YmQxMzYzM2FiY2QzMjkyNDY2NzDTvq2I: 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.838 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.839 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.099 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.099 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.099 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.099 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.671 nvme0n1 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.671 19:17:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDZjMjUzN2ZkODNlZTM4YzVlMTVkNmRkZDM1MGUyOTFlYTAyNGM4NDcxNmU0NTk3NDAzZTZlN2QwYTliMDE4YVnPsYE=: 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.671 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.672 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.672 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.672 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.672 19:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.612 nvme0n1 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.612 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.612 request: 00:28:17.612 { 00:28:17.612 "name": "nvme0", 00:28:17.612 "trtype": "tcp", 00:28:17.613 "traddr": "10.0.0.1", 00:28:17.613 "adrfam": "ipv4", 00:28:17.613 "trsvcid": "4420", 00:28:17.613 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:17.613 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:17.613 "prchk_reftag": false, 00:28:17.613 "prchk_guard": false, 00:28:17.613 "hdgst": false, 00:28:17.613 "ddgst": false, 00:28:17.613 "allow_unrecognized_csi": false, 00:28:17.613 "method": "bdev_nvme_attach_controller", 00:28:17.613 "req_id": 1 00:28:17.613 } 00:28:17.613 Got JSON-RPC error response 00:28:17.613 response: 00:28:17.613 { 00:28:17.613 "code": -5, 00:28:17.613 "message": "Input/output error" 00:28:17.613 } 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.613 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.873 request: 00:28:17.873 { 00:28:17.873 "name": "nvme0", 00:28:17.873 "trtype": "tcp", 00:28:17.873 "traddr": "10.0.0.1", 00:28:17.873 "adrfam": "ipv4", 00:28:17.873 "trsvcid": "4420", 00:28:17.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:17.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:17.873 "prchk_reftag": false, 00:28:17.873 "prchk_guard": false, 00:28:17.873 "hdgst": false, 00:28:17.873 "ddgst": false, 00:28:17.873 "dhchap_key": "key2", 00:28:17.873 "allow_unrecognized_csi": false, 00:28:17.873 "method": "bdev_nvme_attach_controller", 00:28:17.873 "req_id": 1 00:28:17.873 } 00:28:17.873 Got JSON-RPC error response 00:28:17.873 response: 00:28:17.873 { 00:28:17.873 "code": -5, 00:28:17.873 "message": "Input/output error" 00:28:17.873 } 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.873 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 
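The blocks above are the auth suite's negative matrix: attaching with no DH-HMAC-CHAP key, with only key2, and (next) with a mismatched controller key pair key1/ckey2 must each fail, and the NOT wrapper turns the JSON-RPC error -5 (Input/output error) into the expected result. A minimal by-hand sketch of the same host-side flow through scripts/rpc.py follows; it assumes a target already listening at 10.0.0.1:4420 with the NQNs from this log, and that the DHHC-1 secrets were registered under the key names key1/ckey1/ckey2 beforehand (this suite loads them from the /tmp/spdk.key-* files removed during cleanup further below).

    rpc=scripts/rpc.py
    # Mismatched controller key: expected to fail with -5 (Input/output error).
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2 || echo 'attach failed as expected'
    # Matching key pair: expected to succeed and expose the namespace as nvme0n1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # should print nvme0
    $rpc bdev_nvme_detach_controller nvme0
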
00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.873 request: 00:28:17.873 { 00:28:17.873 "name": "nvme0", 00:28:17.873 "trtype": "tcp", 00:28:17.873 "traddr": "10.0.0.1", 00:28:17.873 "adrfam": "ipv4", 00:28:17.873 "trsvcid": "4420", 00:28:17.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:17.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:17.873 "prchk_reftag": false, 00:28:17.873 "prchk_guard": false, 00:28:17.873 "hdgst": false, 00:28:17.873 "ddgst": false, 00:28:17.873 "dhchap_key": "key1", 00:28:17.873 "dhchap_ctrlr_key": "ckey2", 00:28:17.873 "allow_unrecognized_csi": false, 00:28:17.873 "method": "bdev_nvme_attach_controller", 00:28:17.873 "req_id": 1 00:28:17.873 } 00:28:17.873 Got JSON-RPC error response 00:28:17.873 response: 00:28:17.873 { 00:28:17.873 "code": -5, 00:28:17.873 "message": "Input/output error" 00:28:17.873 } 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:17.873 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:17.874 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:17.874 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.874 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 nvme0n1 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.134 19:17:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 request: 00:28:18.134 { 00:28:18.134 "name": "nvme0", 00:28:18.134 "dhchap_key": "key1", 00:28:18.134 "dhchap_ctrlr_key": "ckey2", 00:28:18.134 "method": "bdev_nvme_set_keys", 00:28:18.134 "req_id": 1 00:28:18.134 } 00:28:18.134 Got JSON-RPC error response 00:28:18.134 response: 00:28:18.134 { 00:28:18.134 "code": -13, 00:28:18.134 "message": "Permission denied" 00:28:18.134 } 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.134 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.393 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.393 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:18.393 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:19.333 19:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:20.272 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.272 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:20.272 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.272 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.272 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
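The target half of each re-key is the nvmet_auth_set_key call tracing here: it writes the digest, the DH group and the DHHC-1 secrets into the kernel target's configfs entry for this host, so the host-side attach with key1/ckey1 that follows can complete the bidirectional handshake. A rough by-hand equivalent is sketched below, assuming a kernel whose nvmet exposes the dhchap_* host attributes (the cleanup later in this log removes exactly this hosts entry; the full DHHC-1 strings are the ones echoed in the trace just below).

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir -p "$host"
    echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest used throughout this run
    echo 'ffdhe2048'     > "$host/dhchap_dhgroup"   # DH group used throughout this run
    echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host secret (key1; full value in trace)
    echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"  # controller secret (ckey1; full value in trace)
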
00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjIyMmU4MTkzNzdjYzZjYTdlNzMyYWQzMGE0MGU3NzA3M2ZiN2I5Y2Q1NmJhNWQ3Y13+qg==: 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: ]] 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgxM2FlODA1Y2ZhYmRhOWQxYThkNmE2NmUzZTQ0YTZjZDg4ZTBhM2YwMTU2YmMwZPAE5A==: 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.273 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.533 nvme0n1 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTVkMmI5Zjc1Mjc0ZjMxODE1NjdhYzY1OGY5YTMyNTgULcwn: 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: ]] 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmNiMTBlMTE3NTBlMGI4OWY2ZmNjYzMwNzE1ZjgwNzNwcEW+: 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey1 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.533 request: 00:28:20.533 { 00:28:20.533 "name": "nvme0", 00:28:20.533 "dhchap_key": "key2", 00:28:20.533 "dhchap_ctrlr_key": "ckey1", 00:28:20.533 "method": "bdev_nvme_set_keys", 00:28:20.533 "req_id": 1 00:28:20.533 } 00:28:20.533 Got JSON-RPC error response 00:28:20.533 response: 00:28:20.533 { 00:28:20.533 "code": -13, 00:28:20.533 "message": "Permission denied" 00:28:20.533 } 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:20.533 19:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:21.915 19:17:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:21.915 rmmod nvme_tcp 00:28:21.915 rmmod nvme_fabrics 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 491911 ']' 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 491911 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 491911 ']' 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 491911 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:21.915 19:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 491911 00:28:21.915 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:21.915 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:21.915 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 491911' 00:28:21.916 killing process with pid 491911 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 491911 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 491911 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:21.916 19:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:24.458 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:24.458 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:24.458 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:28:24.458 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:24.459 19:17:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-save 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-restore 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:28:24.459 19:17:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:28:24.459 19:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:27.759 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:27.759 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:28.020 19:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.6f9 /tmp/spdk.key-null.nEb /tmp/spdk.key-sha256.6cb /tmp/spdk.key-sha384.wnU /tmp/spdk.key-sha512.gaL /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:28.020 19:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:31.413 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:31.413 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:31.413 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:31.673 00:28:31.673 real 1m2.324s 00:28:31.673 user 0m55.861s 00:28:31.673 sys 0m15.472s 00:28:31.934 19:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:31.934 19:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.934 ************************************ 00:28:31.934 END TEST nvmf_auth_host 00:28:31.934 ************************************ 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:31.934 
19:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.934 ************************************ 00:28:31.934 START TEST nvmf_bdevperf 00:28:31.934 ************************************ 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:31.934 * Looking for test storage... 00:28:31.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.934 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.935 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:32.195 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:32.195 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:32.195 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.195 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.196 --rc genhtml_branch_coverage=1 00:28:32.196 --rc genhtml_function_coverage=1 00:28:32.196 --rc genhtml_legend=1 00:28:32.196 --rc geninfo_all_blocks=1 00:28:32.196 --rc geninfo_unexecuted_blocks=1 00:28:32.196 00:28:32.196 ' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.196 --rc genhtml_branch_coverage=1 00:28:32.196 --rc genhtml_function_coverage=1 00:28:32.196 --rc genhtml_legend=1 00:28:32.196 --rc geninfo_all_blocks=1 00:28:32.196 --rc geninfo_unexecuted_blocks=1 00:28:32.196 00:28:32.196 ' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.196 --rc genhtml_branch_coverage=1 00:28:32.196 --rc genhtml_function_coverage=1 00:28:32.196 --rc genhtml_legend=1 00:28:32.196 --rc geninfo_all_blocks=1 00:28:32.196 --rc geninfo_unexecuted_blocks=1 00:28:32.196 00:28:32.196 ' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:32.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.196 --rc genhtml_branch_coverage=1 00:28:32.196 --rc genhtml_function_coverage=1 00:28:32.196 --rc genhtml_legend=1 00:28:32.196 --rc geninfo_all_blocks=1 00:28:32.196 --rc geninfo_unexecuted_blocks=1 00:28:32.196 00:28:32.196 ' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:32.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.196 19:18:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:28:32.196 19:18:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:40.337 19:18:08 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:40.337 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:40.337 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- 
# [[ up == up ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:40.337 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:40.337 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # create_target_ns 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:40.337 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=()
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip)))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ phy == phy ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # target=cvl_0_1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # [[ phy == veth ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@68 -- # [[ phy == veth ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772161
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias
00:28:40.338 10.0.0.1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772162
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:28:40.338 10.0.0.2
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:28:40.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:40.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.573 ms
00:28:40.338
00:28:40.338 --- 10.0.0.1 ping statistics ---
00:28:40.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.338 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0
00:28:40.338 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
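The set_ip steps in the trace above derive the dotted-quad addresses from the integer pool value (ip_pool=0x0a000001, i.e. 167772161) before handing them to ip addr add. A minimal bash sketch of that conversion, written out by hand here and not necessarily matching setup.sh line for line:

    # Convert a 32-bit integer into a dotted-quad IPv4 address.
    # 167772161 == 0x0a000001 -> 10.0.0.1; the pool then hands out 10.0.0.2, and so on.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(((val >> 24) & 0xff)) \
            $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) \
            $((val & 0xff))
    }
    val_to_ip 167772161   # 10.0.0.1
    val_to_ip 167772162   # 10.0.0.2

This also explains the "(( ip_pool += 2 ))" step in the loop: each interface pair consumes two consecutive addresses, one for the initiator and one for the target. The ping replies for the target address continue below.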
00:28:40.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:28:40.339
00:28:40.339 --- 10.0.0.2 ping statistics ---
00:28:40.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:40.339 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair++ ))
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=509261
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 509261
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 509261 ']'
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:40.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:40.339 19:18:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.339 [2024-11-05 19:18:08.928210] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:28:40.339 [2024-11-05 19:18:08.928280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:40.339 [2024-11-05 19:18:09.029102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:40.339 [2024-11-05 19:18:09.081559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
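Stripped of the xtrace noise, the interface plumbing performed above reduces to a handful of iproute2 and iptables calls. A condensed sketch, using the cvl_0_* device names from this rig and assuming the namespace creation itself (not part of this excerpt) happened first:

    ip netns add nvmf_ns_spdk                                      # assumed, done before this excerpt
    ip link set cvl_0_1 netns nvmf_ns_spdk                         # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_0                            # initiator side, in the host
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1                  # namespace -> host
    ping -c 1 10.0.0.2                                             # host -> namespace

Note that nvmf_tgt itself is launched with ip netns exec nvmf_ns_spdk, so its TCP listener binds inside the namespace while bdevperf connects from the host side. The target application's startup notices continue below.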
00:28:40.339 [2024-11-05 19:18:09.081611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:40.339 [2024-11-05 19:18:09.081621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:40.339 [2024-11-05 19:18:09.081628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:40.339 [2024-11-05 19:18:09.081635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:40.340 [2024-11-05 19:18:09.083671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:40.340 [2024-11-05 19:18:09.083830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:40.340 [2024-11-05 19:18:09.083843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 [2024-11-05 19:18:09.785326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 Malloc0
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:40.600 [2024-11-05 19:18:09.861209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=()
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:28:40.600 {
00:28:40.600 "params": {
00:28:40.600 "name": "Nvme$subsystem",
00:28:40.600 "trtype": "$TEST_TRANSPORT",
00:28:40.600 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:40.600 "adrfam": "ipv4",
00:28:40.600 "trsvcid": "$NVMF_PORT",
00:28:40.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:40.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:40.600 "hdgst": ${hdgst:-false},
00:28:40.600 "ddgst": ${ddgst:-false}
00:28:40.600 },
00:28:40.600 "method": "bdev_nvme_attach_controller"
00:28:40.600 }
00:28:40.600 EOF
00:28:40.600 )")
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq .
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=,
00:28:40.600 19:18:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:28:40.600 "params": {
00:28:40.600 "name": "Nvme1",
00:28:40.600 "trtype": "tcp",
00:28:40.600 "traddr": "10.0.0.2",
00:28:40.600 "adrfam": "ipv4",
00:28:40.600 "trsvcid": "4420",
00:28:40.600 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:40.600 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:40.600 "hdgst": false,
00:28:40.600 "ddgst": false
00:28:40.600 },
00:28:40.600 "method": "bdev_nvme_attach_controller"
00:28:40.600 }'
00:28:40.601 [2024-11-05 19:18:09.916031] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
[2024-11-05 19:18:09.916080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509850 ]
00:28:40.861 [2024-11-05 19:18:09.986006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.861 [2024-11-05 19:18:10.023372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:41.121 Running I/O for 1 seconds...
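The rpc_cmd calls above are the test harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Provisioning the same target by hand would plausibly look like this sketch; the subcommands and arguments are copied from the logged calls, while the RPC shorthand variable is introduced here for readability:

    RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as logged
    $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself never issues these RPCs; it only consumes the JSON that gen_nvmf_target_json prints (the bdev_nvme_attach_controller block shown above), fed in through --json /dev/fd/62. Its one-second result table follows.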
00:28:42.063 8928.00 IOPS, 34.88 MiB/s
00:28:42.063 Latency(us)
[2024-11-05T18:18:11.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.063 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:42.063 Verification LBA range: start 0x0 length 0x4000
00:28:42.063 Nvme1n1 : 1.01 8996.04 35.14 0.00 0.00 14140.73 1952.43 12943.36
00:28:42.063 [2024-11-05T18:18:11.386Z] ===================================================================================================================
00:28:42.063 [2024-11-05T18:18:11.386Z] Total : 8996.04 35.14 0.00 0.00 14140.73 1952.43 12943.36
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=510399
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=()
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:28:42.324 {
00:28:42.324 "params": {
00:28:42.324 "name": "Nvme$subsystem",
00:28:42.324 "trtype": "$TEST_TRANSPORT",
00:28:42.324 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:42.324 "adrfam": "ipv4",
00:28:42.324 "trsvcid": "$NVMF_PORT",
00:28:42.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:42.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:42.324 "hdgst": ${hdgst:-false},
00:28:42.324 "ddgst": ${ddgst:-false}
00:28:42.324 },
00:28:42.324 "method": "bdev_nvme_attach_controller"
00:28:42.324 }
00:28:42.324 EOF
00:28:42.324 )")
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq .
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=,
00:28:42.324 19:18:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:28:42.325 "params": {
00:28:42.325 "name": "Nvme1",
00:28:42.325 "trtype": "tcp",
00:28:42.325 "traddr": "10.0.0.2",
00:28:42.325 "adrfam": "ipv4",
00:28:42.325 "trsvcid": "4420",
00:28:42.325 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:42.325 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:42.325 "hdgst": false,
00:28:42.325 "ddgst": false
00:28:42.325 },
00:28:42.325 "method": "bdev_nvme_attach_controller"
00:28:42.325 }'
00:28:42.325 [2024-11-05 19:18:11.512273] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:28:42.325 [2024-11-05 19:18:11.512330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510399 ]
00:28:42.325 [2024-11-05 19:18:11.583236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.325 [2024-11-05 19:18:11.617662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:42.586 Running I/O for 15 seconds...
00:28:44.470 8751.00 IOPS, 34.18 MiB/s
[2024-11-05T18:18:14.738Z] 10010.00 IOPS, 39.10 MiB/s
[2024-11-05T18:18:14.738Z] 19:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 509261
00:28:45.415 19:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:45.415 [2024-11-05 19:18:14.475586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.415 [2024-11-05 19:18:14.475630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for the READ commands at lba 91384 through 91672, each completed ABORTED - SQ DELETION (00/08), elided ...]
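Everything after the kill -9 of the target (pid 509261) is the expected fallout: the host side tears down its I/O queue pair, and every outstanding command completes with ABORTED - SQ DELETION, which in NVMe status terms is generic status code 0x08, command aborted due to SQ deletion (the "(00/08)" in each completion line). When triaging a run like this, the storm is easier to summarize than to read; a small sketch, assuming the console output was saved to a file named build.log:

    grep -c 'ABORTED - SQ DELETION' build.log                          # number of aborted completions
    grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -n | sed -n '1p;$p'  # first and last LBA touched

The remainder of the storm is elided below; only the final WRITE command of the burst is kept so the record picks up where it left off.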
[... the storm continues unchanged: READ commands for lba 91680 through 92016, then WRITE commands for lba 92024 through 92152, every completion ABORTED - SQ DELETION (00/08), elided ...]
00:28:45.418 [2024-11-05 19:18:14.477457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:45.418 [2024-11-05 19:18:14.477464]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477637] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.418 [2024-11-05 19:18:14.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.477950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18903f0 is same with the state(6) to be set 00:28:45.418 [2024-11-05 19:18:14.477960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.418 [2024-11-05 19:18:14.477966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.418 [2024-11-05 19:18:14.477974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92392 len:8 PRP1 0x0 PRP2 0x0 00:28:45.418 [2024-11-05 19:18:14.477982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.418 [2024-11-05 19:18:14.481549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.418 [2024-11-05 19:18:14.481604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.482361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.482379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.482388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.482609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.482836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.482845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.482855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.482864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.419 [2024-11-05 19:18:14.495601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.496236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.496277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.496288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.496529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.496762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.496773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.496781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.496789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.419 [2024-11-05 19:18:14.509539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.510084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.510105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.510113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.510332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.510552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.510562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.510569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.510582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.419 [2024-11-05 19:18:14.523356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.523996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.524035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.524046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.524285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.524509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.524519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.524526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.524535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.419 [2024-11-05 19:18:14.537307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.537872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.537912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.537924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.538165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.538389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.538399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.538407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.538415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.419 [2024-11-05 19:18:14.551164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.551738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.551764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.551773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.551992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.552212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.552222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.552230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.552236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.419 [2024-11-05 19:18:14.564971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.565630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.565669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.565681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.565927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.566151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.566162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.566170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.566178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.419 [2024-11-05 19:18:14.578932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.579601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.579640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.579651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.579899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.580123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.580133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.580141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.580149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.419 [2024-11-05 19:18:14.592896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.593465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.593485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.593494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.593713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.593938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.593948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.593956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.593963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.419 [2024-11-05 19:18:14.606694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.419 [2024-11-05 19:18:14.607230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.419 [2024-11-05 19:18:14.607248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.419 [2024-11-05 19:18:14.607256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.419 [2024-11-05 19:18:14.607480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.419 [2024-11-05 19:18:14.607699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.419 [2024-11-05 19:18:14.607710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.419 [2024-11-05 19:18:14.607718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.419 [2024-11-05 19:18:14.607726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.419 [2024-11-05 19:18:14.620680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.621336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.621375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.621386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.621625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.621855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.621866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.621874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.621883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.420 [2024-11-05 19:18:14.634623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.635207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.635227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.635236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.635455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.635674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.635683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.635691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.635698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.420 [2024-11-05 19:18:14.648470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.649005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.649024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.649032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.649251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.649470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.649485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.649492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.649499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.420 [2024-11-05 19:18:14.662441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.662961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.662978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.662986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.663205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.663425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.663434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.663442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.663448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.420 [2024-11-05 19:18:14.676391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.676932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.676949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.676957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.677176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.677395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.677403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.677410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.677417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.420 [2024-11-05 19:18:14.690374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.691037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.691077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.691088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.691326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.691550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.691559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.691566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.691579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.420 [2024-11-05 19:18:14.704327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.704869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.704909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.704920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.705158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.705382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.705391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.705399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.705408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.420 [2024-11-05 19:18:14.718168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.718743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.718769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.718777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.718997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.420 [2024-11-05 19:18:14.719217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.420 [2024-11-05 19:18:14.719226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.420 [2024-11-05 19:18:14.719233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.420 [2024-11-05 19:18:14.719240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.420 [2024-11-05 19:18:14.731979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.420 [2024-11-05 19:18:14.732615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.420 [2024-11-05 19:18:14.732656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.420 [2024-11-05 19:18:14.732667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.420 [2024-11-05 19:18:14.732915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.421 [2024-11-05 19:18:14.733140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.421 [2024-11-05 19:18:14.733150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.421 [2024-11-05 19:18:14.733158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.421 [2024-11-05 19:18:14.733167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.683 [2024-11-05 19:18:14.745919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.746614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.746653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.746664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.746910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.747135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.747145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.747154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.747162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.683 [2024-11-05 19:18:14.759904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.760561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.760601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.760613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.760860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.761085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.761095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.761103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.761112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.683 [2024-11-05 19:18:14.773857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.774276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.774296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.774304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.774523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.774743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.774758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.774765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.774772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.683 9268.67 IOPS, 36.21 MiB/s [2024-11-05T18:18:15.006Z] [2024-11-05 19:18:14.787715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.788369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.788410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.788427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.788668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.788900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.788911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.788918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.788927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.683 [2024-11-05 19:18:14.801669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.802318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.802356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.802368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.802606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.802838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.802849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.802857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.802865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.683 [2024-11-05 19:18:14.815605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.816308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.816347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.816359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.816598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.816837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.683 [2024-11-05 19:18:14.816848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.683 [2024-11-05 19:18:14.816856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.683 [2024-11-05 19:18:14.816864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.683 [2024-11-05 19:18:14.829397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.683 [2024-11-05 19:18:14.829981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.683 [2024-11-05 19:18:14.830002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.683 [2024-11-05 19:18:14.830011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.683 [2024-11-05 19:18:14.830231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.683 [2024-11-05 19:18:14.830457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.684 [2024-11-05 19:18:14.830466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.684 [2024-11-05 19:18:14.830473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.684 [2024-11-05 19:18:14.830480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.684 [2024-11-05 19:18:14.843227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.684 [2024-11-05 19:18:14.843804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.684 [2024-11-05 19:18:14.843822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.684 [2024-11-05 19:18:14.843830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.684 [2024-11-05 19:18:14.844049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.684 [2024-11-05 19:18:14.844269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.684 [2024-11-05 19:18:14.844278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.684 [2024-11-05 19:18:14.844286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.684 [2024-11-05 19:18:14.844293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.684 [2024-11-05 19:18:14.857056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.684 [2024-11-05 19:18:14.857730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.684 [2024-11-05 19:18:14.857776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.684 [2024-11-05 19:18:14.857789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.684 [2024-11-05 19:18:14.858029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.684 [2024-11-05 19:18:14.858253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.684 [2024-11-05 19:18:14.858262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.684 [2024-11-05 19:18:14.858270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.684 [2024-11-05 19:18:14.858279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.684 [2024-11-05 19:18:14.871022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.684 [2024-11-05 19:18:14.871558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.684 [2024-11-05 19:18:14.871578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.684 [2024-11-05 19:18:14.871586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.684 [2024-11-05 19:18:14.871812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.684 [2024-11-05 19:18:14.872033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.684 [2024-11-05 19:18:14.872043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.684 [2024-11-05 19:18:14.872050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.684 [2024-11-05 19:18:14.872065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:45.684 [2024-11-05 19:18:14.884815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.684 [2024-11-05 19:18:14.885377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.684 [2024-11-05 19:18:14.885395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:45.684 [2024-11-05 19:18:14.885403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:45.684 [2024-11-05 19:18:14.885623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:45.684 [2024-11-05 19:18:14.885850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.684 [2024-11-05 19:18:14.885860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.684 [2024-11-05 19:18:14.885867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.684 [2024-11-05 19:18:14.885874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.684 [2024-11-05 19:18:14.898607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.899166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.899183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.899191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.899410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.899629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.684 [2024-11-05 19:18:14.899638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.684 [2024-11-05 19:18:14.899646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.684 [2024-11-05 19:18:14.899652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.684 [2024-11-05 19:18:14.912417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.912935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.912952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.912960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.913179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.913398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.684 [2024-11-05 19:18:14.913408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.684 [2024-11-05 19:18:14.913416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.684 [2024-11-05 19:18:14.913422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.684 [2024-11-05 19:18:14.926380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.927018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.927058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.927069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.927308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.927532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.684 [2024-11-05 19:18:14.927541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.684 [2024-11-05 19:18:14.927549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.684 [2024-11-05 19:18:14.927557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.684 [2024-11-05 19:18:14.940302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.940873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.940912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.940925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.941166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.941389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.684 [2024-11-05 19:18:14.941398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.684 [2024-11-05 19:18:14.941407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.684 [2024-11-05 19:18:14.941415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.684 [2024-11-05 19:18:14.954158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.954732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.954757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.954766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.954985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.955205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.684 [2024-11-05 19:18:14.955214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.684 [2024-11-05 19:18:14.955222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.684 [2024-11-05 19:18:14.955229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.684 [2024-11-05 19:18:14.967956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.684 [2024-11-05 19:18:14.968641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.684 [2024-11-05 19:18:14.968680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.684 [2024-11-05 19:18:14.968696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.684 [2024-11-05 19:18:14.968945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.684 [2024-11-05 19:18:14.969170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.685 [2024-11-05 19:18:14.969179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.685 [2024-11-05 19:18:14.969187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.685 [2024-11-05 19:18:14.969195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.685 [2024-11-05 19:18:14.981944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.685 [2024-11-05 19:18:14.982566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.685 [2024-11-05 19:18:14.982605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.685 [2024-11-05 19:18:14.982617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.685 [2024-11-05 19:18:14.982865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.685 [2024-11-05 19:18:14.983090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.685 [2024-11-05 19:18:14.983100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.685 [2024-11-05 19:18:14.983108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.685 [2024-11-05 19:18:14.983117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.685 [2024-11-05 19:18:14.995858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.685 [2024-11-05 19:18:14.996487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.685 [2024-11-05 19:18:14.996526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.685 [2024-11-05 19:18:14.996538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.685 [2024-11-05 19:18:14.996784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.685 [2024-11-05 19:18:14.997009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.685 [2024-11-05 19:18:14.997019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.685 [2024-11-05 19:18:14.997027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.685 [2024-11-05 19:18:14.997035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.009782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.010426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.010465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.010476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.010714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.010954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.010965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.010974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.010982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.023727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.024396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.024435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.024446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.024685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.024918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.024929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.024937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.024945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.037681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.038273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.038294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.038302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.038521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.038741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.038756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.038764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.038771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.051497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.052158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.052198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.052209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.052447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.052671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.052681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.052688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.052701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.065483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.066059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.066079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.066087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.066307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.066527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.066536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.066543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.066550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.079288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.079975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.080014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.080026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.080264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.080498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.080508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.080517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.080525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.093361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.093853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.093892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.093905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.947 [2024-11-05 19:18:15.094147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.947 [2024-11-05 19:18:15.094370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.947 [2024-11-05 19:18:15.094379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.947 [2024-11-05 19:18:15.094387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.947 [2024-11-05 19:18:15.094395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.947 [2024-11-05 19:18:15.107343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.947 [2024-11-05 19:18:15.108008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.947 [2024-11-05 19:18:15.108047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.947 [2024-11-05 19:18:15.108058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.108296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.108520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.108530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.108538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.108546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.121304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.121875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.121915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.121927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.122168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.122392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.122402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.122409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.122418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.135165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.135845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.135883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.135894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.136133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.136356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.136366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.136374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.136383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.149130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.149825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.149865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.149880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.150119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.150342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.150352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.150360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.150369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.163112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.163801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.163840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.163851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.164090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.164313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.164323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.164330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.164339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.177084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.177762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.177801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.177814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.178054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.178278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.178287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.178295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.178303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.191055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.191730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.191776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.191789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.192029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.192257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.192267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.192275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.192283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.205022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.205671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.205710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.205723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.205971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.206196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.206206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.206214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.206222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.218971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.219642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.219680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.219692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.219939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.220163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.220173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.220181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.220189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.232931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.233389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.233409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.233417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.948 [2024-11-05 19:18:15.233638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.948 [2024-11-05 19:18:15.233864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.948 [2024-11-05 19:18:15.233875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.948 [2024-11-05 19:18:15.233882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.948 [2024-11-05 19:18:15.233894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.948 [2024-11-05 19:18:15.246837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.948 [2024-11-05 19:18:15.247397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.948 [2024-11-05 19:18:15.247414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.948 [2024-11-05 19:18:15.247422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.949 [2024-11-05 19:18:15.247641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.949 [2024-11-05 19:18:15.247865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.949 [2024-11-05 19:18:15.247876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.949 [2024-11-05 19:18:15.247884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.949 [2024-11-05 19:18:15.247891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.949 [2024-11-05 19:18:15.260623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.949 [2024-11-05 19:18:15.261166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.949 [2024-11-05 19:18:15.261183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:45.949 [2024-11-05 19:18:15.261191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:45.949 [2024-11-05 19:18:15.261409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:45.949 [2024-11-05 19:18:15.261628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.949 [2024-11-05 19:18:15.261638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.949 [2024-11-05 19:18:15.261645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.949 [2024-11-05 19:18:15.261652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.210 [2024-11-05 19:18:15.274417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.210 [2024-11-05 19:18:15.274984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.210 [2024-11-05 19:18:15.275002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.275011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.275230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.275449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.275458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.275465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.275472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.288214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.288735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.288757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.288765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.288984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.289204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.289212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.289220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.289227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.302159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.302822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.302862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.302874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.303113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.303337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.303347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.303355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.303364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.316113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.316689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.316709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.316718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.316954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.317175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.317184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.317192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.317199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.329928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.330542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.330580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.330596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.330842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.331066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.331076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.331084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.331092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.343837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.344435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.344474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.344485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.344724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.344960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.344972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.344980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.344989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.357723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.358264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.358284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.358292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.358512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.358731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.358740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.358754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.358761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.371696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.372307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.372347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.372358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.372596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.372833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.372844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.372852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.372860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.385611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.386244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.386284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.386295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.211 [2024-11-05 19:18:15.386533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.211 [2024-11-05 19:18:15.386764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.211 [2024-11-05 19:18:15.386775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.211 [2024-11-05 19:18:15.386783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.211 [2024-11-05 19:18:15.386791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.211 [2024-11-05 19:18:15.399526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.211 [2024-11-05 19:18:15.400159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.211 [2024-11-05 19:18:15.400199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.211 [2024-11-05 19:18:15.400210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.400448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.400672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.400681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.400689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.400698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.413446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.414084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.414123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.414134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.414372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.414596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.414606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.414614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.414627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.427387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.428105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.428127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.428136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.428360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.428581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.428591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.428599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.428606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.441347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.441998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.442037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.442049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.442288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.442511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.442521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.442529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.442537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.455283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.455878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.455918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.455931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.456171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.456395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.456405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.456412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.456421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.469168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.469799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.469839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.469851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.470091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.470314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.470324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.470332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.470340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.483129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.483769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.483807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.483820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.484061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.484287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.484298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.484307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.484317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.497059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.497688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.497727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.497738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.497984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.498209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.498219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.498227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.498235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.510879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.511505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.511544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.511560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.511807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.512032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.512042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.212 [2024-11-05 19:18:15.512050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.212 [2024-11-05 19:18:15.512059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.212 [2024-11-05 19:18:15.524809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.212 [2024-11-05 19:18:15.525368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.212 [2024-11-05 19:18:15.525407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.212 [2024-11-05 19:18:15.525418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.212 [2024-11-05 19:18:15.525657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.212 [2024-11-05 19:18:15.525888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.212 [2024-11-05 19:18:15.525899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.213 [2024-11-05 19:18:15.525906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.213 [2024-11-05 19:18:15.525915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.538667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.539249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.539269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.539277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.539497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.539717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.539726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.539733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.539741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.552481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.553021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.553039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.553047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.553266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.553485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.553499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.553507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.553513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.566454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.567078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.567117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.567128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.567367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.567590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.567600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.567608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.567617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.580366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.581023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.581063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.581074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.581313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.581536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.581546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.581554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.581562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.594315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.594980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.595019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.595031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.595269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.595493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.595502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.595510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.595523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.608272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.608899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.608938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.608950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.609189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.609412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.609422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.609430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.609438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.622191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.622830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.622869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.622881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.623121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.623345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.623354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.623362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.475 [2024-11-05 19:18:15.623370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.475 [2024-11-05 19:18:15.636123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.475 [2024-11-05 19:18:15.636778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.475 [2024-11-05 19:18:15.636817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.475 [2024-11-05 19:18:15.636829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.475 [2024-11-05 19:18:15.637067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.475 [2024-11-05 19:18:15.637291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.475 [2024-11-05 19:18:15.637301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.475 [2024-11-05 19:18:15.637309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.637318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.650058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.650627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.650665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.650676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.650924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.651149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.651159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.651167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.651175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.663907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.664473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.664493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.664501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.664721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.664950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.664959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.664966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.664975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.677699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.678225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.678243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.678251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.678470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.678689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.678699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.678706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.678713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.691488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.692022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.692041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.692053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.692273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.692493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.692502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.692509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.692516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.705449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.706071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.706110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.706121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.706360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.706584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.706593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.706601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.706609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.719372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.720045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.720085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.720096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.720335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.720559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.720569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.720577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.720586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.733324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.733771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.733795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.733804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.734027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.734246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.734269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.734276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.734285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.747244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.747852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.747891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.747904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.748143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.748367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.748376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.748384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.748393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.761140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.761703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.761722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.761730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.761956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.762176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.476 [2024-11-05 19:18:15.762185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.476 [2024-11-05 19:18:15.762192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.476 [2024-11-05 19:18:15.762199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.476 [2024-11-05 19:18:15.775138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.476 [2024-11-05 19:18:15.775768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.476 [2024-11-05 19:18:15.775808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.476 [2024-11-05 19:18:15.775820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.476 [2024-11-05 19:18:15.776059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.476 [2024-11-05 19:18:15.776287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.477 [2024-11-05 19:18:15.776298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.477 [2024-11-05 19:18:15.776306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.477 [2024-11-05 19:18:15.776323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.477 6951.50 IOPS, 27.15 MiB/s [2024-11-05T18:18:15.800Z] [2024-11-05 19:18:15.789065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.477 [2024-11-05 19:18:15.789700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.477 [2024-11-05 19:18:15.789739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.477 [2024-11-05 19:18:15.789762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.477 [2024-11-05 19:18:15.790002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.477 [2024-11-05 19:18:15.790226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.477 [2024-11-05 19:18:15.790235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.477 [2024-11-05 19:18:15.790243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.477 [2024-11-05 19:18:15.790251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.738 [2024-11-05 19:18:15.802998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.803526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.803546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.803555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.803784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.804005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.804016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.804023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.804030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
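(Note: the interleaved "6951.50 IOPS, 27.15 MiB/s" fragment is bdevperf's periodic throughput sample, still printing while this reconnect loop fails, so I/O is presumably completing on a path of cnode1 other than the one stuck in the failed state. The two figures are mutually consistent if the run uses a 4 KiB I/O size, which is an assumption not stated in this stretch of the log: 6951.50 IOPS x 4096 B = 28,473,344 B/s / 1,048,576 ≈ 27.15 MiB/s.)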
00:28:46.739 [2024-11-05 19:18:15.816969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.817567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.817606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.817617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.817865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.818090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.818100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.818108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.818116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.830850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.831487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.831526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.831537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.831786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.832011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.832020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.832028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.832038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.844774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.845443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.845482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.845492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.845731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.845966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.845976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.845984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.845992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.858725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.859263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.859284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.859292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.859511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.859731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.859740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.859756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.859764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.872695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.873259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.873277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.873289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.873508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.873727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.873736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.873744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.873758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.886488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.887120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.887159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.887170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.887408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.887632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.887641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.887649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.887658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.900434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.901067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.901106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.901117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.901356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.901579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.901590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.901597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.901607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.914349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.914983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.915022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.915033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.915272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.915500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.915509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.915517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.915525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.928279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.739 [2024-11-05 19:18:15.928946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.739 [2024-11-05 19:18:15.928986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.739 [2024-11-05 19:18:15.928997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.739 [2024-11-05 19:18:15.929235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.739 [2024-11-05 19:18:15.929459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.739 [2024-11-05 19:18:15.929468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.739 [2024-11-05 19:18:15.929476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.739 [2024-11-05 19:18:15.929484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.739 [2024-11-05 19:18:15.942249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:15.942851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:15.942890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:15.942903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:15.943144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:15.943368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:15.943378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:15.943386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:15.943394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:15.956146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:15.956782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:15.956821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:15.956835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:15.957075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:15.957298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:15.957307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:15.957320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:15.957329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:15.970078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:15.970609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:15.970630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:15.970639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:15.970864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:15.971084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:15.971094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:15.971101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:15.971108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:15.984057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:15.984585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:15.984603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:15.984611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:15.984835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:15.985055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:15.985066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:15.985074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:15.985081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:15.998028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:15.998553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:15.998571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:15.998578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:15.998803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:15.999024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:15.999033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:15.999040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:15.999047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:16.011984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:16.012556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:16.012575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:16.012583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:16.012808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:16.013029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:16.013039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:16.013046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:16.013053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:16.025814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:16.026472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:16.026512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:16.026523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:16.026770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:16.026995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:16.027005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:16.027013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:16.027021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:16.039807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:16.040344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:16.040364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:16.040373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:16.040592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:16.040821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:16.040831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:16.040838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:16.040845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.740 [2024-11-05 19:18:16.053804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.740 [2024-11-05 19:18:16.054346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.740 [2024-11-05 19:18:16.054386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:46.740 [2024-11-05 19:18:16.054403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:46.740 [2024-11-05 19:18:16.054642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:46.740 [2024-11-05 19:18:16.054876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.740 [2024-11-05 19:18:16.054887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.740 [2024-11-05 19:18:16.054895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.740 [2024-11-05 19:18:16.054903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.004 [2024-11-05 19:18:16.067660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.004 [2024-11-05 19:18:16.068201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.004 [2024-11-05 19:18:16.068222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.004 [2024-11-05 19:18:16.068230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.004 [2024-11-05 19:18:16.068450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.004 [2024-11-05 19:18:16.068669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.004 [2024-11-05 19:18:16.068679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.004 [2024-11-05 19:18:16.068687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.004 [2024-11-05 19:18:16.068694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.004 [2024-11-05 19:18:16.081658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.004 [2024-11-05 19:18:16.082191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.004 [2024-11-05 19:18:16.082209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.004 [2024-11-05 19:18:16.082217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.004 [2024-11-05 19:18:16.082435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.004 [2024-11-05 19:18:16.082654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.004 [2024-11-05 19:18:16.082664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.004 [2024-11-05 19:18:16.082672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.004 [2024-11-05 19:18:16.082679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.004 [2024-11-05 19:18:16.095647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.004 [2024-11-05 19:18:16.096173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.004 [2024-11-05 19:18:16.096190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.004 [2024-11-05 19:18:16.096198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.004 [2024-11-05 19:18:16.096416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.004 [2024-11-05 19:18:16.096640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.004 [2024-11-05 19:18:16.096650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.004 [2024-11-05 19:18:16.096657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.004 [2024-11-05 19:18:16.096664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.004 [2024-11-05 19:18:16.109451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.004 [2024-11-05 19:18:16.110014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.004 [2024-11-05 19:18:16.110032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.004 [2024-11-05 19:18:16.110040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.004 [2024-11-05 19:18:16.110259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.004 [2024-11-05 19:18:16.110479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.004 [2024-11-05 19:18:16.110488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.004 [2024-11-05 19:18:16.110495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.004 [2024-11-05 19:18:16.110502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.004 [2024-11-05 19:18:16.123338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.004 [2024-11-05 19:18:16.123772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.004 [2024-11-05 19:18:16.123791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.004 [2024-11-05 19:18:16.123799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.004 [2024-11-05 19:18:16.124019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.124238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.124248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.124255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.124263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.005 [2024-11-05 19:18:16.137223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.137644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.137662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.137670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.137895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.138115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.138125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.138136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.138143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.005 [2024-11-05 19:18:16.151103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.151659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.151675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.151683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.151907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.152127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.152136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.152143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.152150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.005 [2024-11-05 19:18:16.164900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.165458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.165474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.165482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.165700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.165927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.165936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.165944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.165951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.005 [2024-11-05 19:18:16.178699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.179378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.179418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.179429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.179668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.179900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.179910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.179918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.179926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.005 [2024-11-05 19:18:16.192490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.193037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.193058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.193066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.193286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.193507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.193517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.193525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.193532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.005 [2024-11-05 19:18:16.206493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.206956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.206996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.207008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.207248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.207472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.207483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.207491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.207499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.005 [2024-11-05 19:18:16.220486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.221063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.221084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.005 [2024-11-05 19:18:16.221092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.005 [2024-11-05 19:18:16.221312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.005 [2024-11-05 19:18:16.221532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.005 [2024-11-05 19:18:16.221542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.005 [2024-11-05 19:18:16.221550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.005 [2024-11-05 19:18:16.221557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.005 [2024-11-05 19:18:16.234313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.005 [2024-11-05 19:18:16.234841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.005 [2024-11-05 19:18:16.234859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.234872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.235091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.235311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.235320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.235328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.235335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.006 [2024-11-05 19:18:16.248296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.248996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.249036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.249047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.249286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.249510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.249520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.249528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.249537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.006 [2024-11-05 19:18:16.262094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.262669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.262689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.262697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.262924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.263145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.263154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.263161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.263168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.006 [2024-11-05 19:18:16.275919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.276478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.276495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.276503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.276722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.276953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.276963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.276970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.276977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.006 [2024-11-05 19:18:16.289736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.290303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.290321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.290328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.290547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.290774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.290784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.290791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.290798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.006 [2024-11-05 19:18:16.303542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.304171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.304211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.304222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.304461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.304685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.304695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.304702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.304711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.006 [2024-11-05 19:18:16.317520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.006 [2024-11-05 19:18:16.318074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.006 [2024-11-05 19:18:16.318095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.006 [2024-11-05 19:18:16.318104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.006 [2024-11-05 19:18:16.318324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.006 [2024-11-05 19:18:16.318544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.006 [2024-11-05 19:18:16.318553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.006 [2024-11-05 19:18:16.318561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.006 [2024-11-05 19:18:16.318572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.268 [2024-11-05 19:18:16.331331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.268 [2024-11-05 19:18:16.331739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.268 [2024-11-05 19:18:16.331767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.268 [2024-11-05 19:18:16.331775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.268 [2024-11-05 19:18:16.331995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.268 [2024-11-05 19:18:16.332214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.268 [2024-11-05 19:18:16.332223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.268 [2024-11-05 19:18:16.332230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.268 [2024-11-05 19:18:16.332237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.269 [2024-11-05 19:18:16.345198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.345709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.345728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.345736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.345960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.346180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.346197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.346205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.346212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.269 [2024-11-05 19:18:16.359174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.359688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.359705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.359712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.359936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.360156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.360166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.360173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.360180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.269 [2024-11-05 19:18:16.373143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.373665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.373681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.373689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.373914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.374134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.374144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.374151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.374158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.269 [2024-11-05 19:18:16.387137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.387696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.387713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.387721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.387945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.388165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.388174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.388181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.388188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.269 [2024-11-05 19:18:16.400943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.401458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.401475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.401483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.401701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.401929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.401940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.401947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.401954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.269 [2024-11-05 19:18:16.414913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.415473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.415490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.415501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.415720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.415945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.415957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.415964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.415971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.269 [2024-11-05 19:18:16.428731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.429292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.429309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.429316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.429534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.429760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.429770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.429777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.429784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.269 [2024-11-05 19:18:16.442533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.443100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.443118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.443126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.443345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.443564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.443573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.443580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.443587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.269 [2024-11-05 19:18:16.456341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.456881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.456898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.456906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.457125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.457352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.457361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.457368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.269 [2024-11-05 19:18:16.457375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.269 [2024-11-05 19:18:16.470130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.269 [2024-11-05 19:18:16.470689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.269 [2024-11-05 19:18:16.470706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.269 [2024-11-05 19:18:16.470714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.269 [2024-11-05 19:18:16.470938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.269 [2024-11-05 19:18:16.471158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.269 [2024-11-05 19:18:16.471167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.269 [2024-11-05 19:18:16.471175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.471181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.270 [2024-11-05 19:18:16.483930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.484556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.484596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.484609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.484857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.485093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.485104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.485113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.485123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.270 [2024-11-05 19:18:16.497889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.498561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.498602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.498613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.498861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.499088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.499098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.499106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.499119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.270 [2024-11-05 19:18:16.511884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.512460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.512480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.512488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.512708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.512936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.512947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.512954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.512961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.270 [2024-11-05 19:18:16.525759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.526425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.526465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.526475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.526714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.526947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.526959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.526967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.526975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.270 [2024-11-05 19:18:16.539604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.540191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.540212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.540220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.540440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.540660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.540670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.540677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.540684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.270 [2024-11-05 19:18:16.553440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.554081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.554121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.554132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.554371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.554595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.554606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.554613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.554622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.270 [2024-11-05 19:18:16.567367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.567908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.567929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.567937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.568157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.568376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.568386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.568393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.568400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.270 [2024-11-05 19:18:16.581346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.270 [2024-11-05 19:18:16.581885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.270 [2024-11-05 19:18:16.581903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:47.270 [2024-11-05 19:18:16.581911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:47.270 [2024-11-05 19:18:16.582130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:47.270 [2024-11-05 19:18:16.582350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.270 [2024-11-05 19:18:16.582359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.270 [2024-11-05 19:18:16.582366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.270 [2024-11-05 19:18:16.582373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.531 [2024-11-05 19:18:16.595342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.595875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.595893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.595906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.596124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.596344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.596353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.596361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.596368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.609331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.609856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.609874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.609882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.610101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.610320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.610329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.610337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.610343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.623315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.623849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.623867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.623874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.624093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.624312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.624321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.624328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.624335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.637114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.637627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.637644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.637652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.637878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.638102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.638110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.638118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.638125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.651081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.651641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.651658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.651665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.651892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.652111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.652121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.652129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.652136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.664885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.665424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.531 [2024-11-05 19:18:16.665441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.531 [2024-11-05 19:18:16.665449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.531 [2024-11-05 19:18:16.665668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.531 [2024-11-05 19:18:16.665895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.531 [2024-11-05 19:18:16.665905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.531 [2024-11-05 19:18:16.665912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.531 [2024-11-05 19:18:16.665918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.531 [2024-11-05 19:18:16.678869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.531 [2024-11-05 19:18:16.679427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.679443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.679450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.679669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.679894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.679904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.679912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.679922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.692675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.693209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.693226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.693234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.693453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.693672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.693681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.693690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.693697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.706655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.707292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.707331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.707342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.707581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.707814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.707825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.707833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.707841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.720598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.721170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.721191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.721199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.721419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.721639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.721648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.721655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.721662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.734439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.735115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.735154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.735165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.735403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.735627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.735638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.735646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.735654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.748403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.749051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.749090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.749102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.749342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.749566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.749575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.749583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.749592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.762333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.763028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.763067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.763078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.763317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.763541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.763551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.763559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.763567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.776315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.777034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.777074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.777089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.777328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.777552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.777561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.777569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.777577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 5561.20 IOPS, 21.72 MiB/s [2024-11-05T18:18:16.855Z] [2024-11-05 19:18:16.790113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.790614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.790654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.790667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.790917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.791143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.791152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.791160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.791168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.803906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.532 [2024-11-05 19:18:16.804581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.532 [2024-11-05 19:18:16.804620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.532 [2024-11-05 19:18:16.804631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.532 [2024-11-05 19:18:16.804879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.532 [2024-11-05 19:18:16.805103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.532 [2024-11-05 19:18:16.805113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.532 [2024-11-05 19:18:16.805122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.532 [2024-11-05 19:18:16.805130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.532 [2024-11-05 19:18:16.817879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.533 [2024-11-05 19:18:16.818511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.533 [2024-11-05 19:18:16.818550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.533 [2024-11-05 19:18:16.818561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.533 [2024-11-05 19:18:16.818808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.533 [2024-11-05 19:18:16.819038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.533 [2024-11-05 19:18:16.819048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.533 [2024-11-05 19:18:16.819055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.533 [2024-11-05 19:18:16.819064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.533 [2024-11-05 19:18:16.831800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.533 [2024-11-05 19:18:16.832470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.533 [2024-11-05 19:18:16.832509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.533 [2024-11-05 19:18:16.832520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.533 [2024-11-05 19:18:16.832767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.533 [2024-11-05 19:18:16.832992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.533 [2024-11-05 19:18:16.833001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.533 [2024-11-05 19:18:16.833009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.533 [2024-11-05 19:18:16.833017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.533 [2024-11-05 19:18:16.845757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.533 [2024-11-05 19:18:16.846419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.533 [2024-11-05 19:18:16.846457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.533 [2024-11-05 19:18:16.846468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.533 [2024-11-05 19:18:16.846707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.533 [2024-11-05 19:18:16.846941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.533 [2024-11-05 19:18:16.846952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.533 [2024-11-05 19:18:16.846960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.533 [2024-11-05 19:18:16.846968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.859705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.860283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.860304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.860312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.860532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.860757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.860768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.860779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.860786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.873514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.874173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.874211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.874223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.874461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.874684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.874694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.874702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.874711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.887478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.888147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.888186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.888198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.888436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.888659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.888668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.888676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.888685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.901430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.902015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.902036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.902044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.902263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.902483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.902492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.902499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.902506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.915240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.915950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.915990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.916001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.916239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.916463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.916473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.916481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.916489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.929036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.929610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.929630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.929638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.929864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.930084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.930096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.930103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.930110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.942865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.943456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.943495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.943506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.943744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.943977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.943987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.943995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.944003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.956738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.957412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.957451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.957467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.957706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.957938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.957952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.957960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.957968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.970705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.971337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.971377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.971388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.971626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.971859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.795 [2024-11-05 19:18:16.971870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.795 [2024-11-05 19:18:16.971878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.795 [2024-11-05 19:18:16.971886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.795 [2024-11-05 19:18:16.984629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.795 [2024-11-05 19:18:16.985267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.795 [2024-11-05 19:18:16.985307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.795 [2024-11-05 19:18:16.985318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.795 [2024-11-05 19:18:16.985557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.795 [2024-11-05 19:18:16.985790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:16.985801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:16.985810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:16.985818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:16.998571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:16.999207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:16.999246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:16.999257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:16.999495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:16.999724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:16.999735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:16.999743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:16.999760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.012503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.013207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.013245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.013257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.013495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.013718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.013728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.013735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.013744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.026501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.027134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.027174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.027185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.027424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.027648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.027657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.027665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.027674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.040417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.040953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.040974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.040982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.041202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.041422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.041431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.041443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.041450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.054391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.054951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.054969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.054977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.055196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.055415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.055424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.055431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.055437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.068375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.069042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.069081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.069092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.069331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.069554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.069564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.069573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.069581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.082365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.083045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.083085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.083096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.083335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.083558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.083568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.083576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.083584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.096341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.097045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.097084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.097095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.097334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.097558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.097568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.097576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.097584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.796 [2024-11-05 19:18:17.110327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.796 [2024-11-05 19:18:17.110888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.796 [2024-11-05 19:18:17.110927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:47.796 [2024-11-05 19:18:17.110940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:47.796 [2024-11-05 19:18:17.111181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:47.796 [2024-11-05 19:18:17.111405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.796 [2024-11-05 19:18:17.111415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.796 [2024-11-05 19:18:17.111423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.796 [2024-11-05 19:18:17.111431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.124187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.124850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.124890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.124901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.125139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.125363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.125373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.125381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.125389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.138138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.138680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.138700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.138720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.138970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.139194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.139203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.139210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.139217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.151947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.152565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.152604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.152616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.152863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.153087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.153097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.153105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.153114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.165851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.166495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.166535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.166546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.166794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.167018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.167028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.167036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.167044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.179782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.180431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.180470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.180482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.180720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.180958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.180969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.180977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.180985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.193733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.194386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.194425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.194436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.194674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.194908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.194919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.194927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.194936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.207671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.208327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.208366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.208377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.208615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.208849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.208859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.208868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.208876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.221623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.222307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.222346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.222358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.222596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.222828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.222838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.222851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.222859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.059 [2024-11-05 19:18:17.235595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.059 [2024-11-05 19:18:17.236172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.059 [2024-11-05 19:18:17.236193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.059 [2024-11-05 19:18:17.236201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.059 [2024-11-05 19:18:17.236421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.059 [2024-11-05 19:18:17.236641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.059 [2024-11-05 19:18:17.236650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.059 [2024-11-05 19:18:17.236658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.059 [2024-11-05 19:18:17.236665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.060 [2024-11-05 19:18:17.249403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.060 [2024-11-05 19:18:17.249853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.060 [2024-11-05 19:18:17.249872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.060 [2024-11-05 19:18:17.249879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.060 [2024-11-05 19:18:17.250098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.060 [2024-11-05 19:18:17.250317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.060 [2024-11-05 19:18:17.250327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.060 [2024-11-05 19:18:17.250334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.060 [2024-11-05 19:18:17.250341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.060 [2024-11-05 19:18:17.263278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.060 [2024-11-05 19:18:17.263883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.060 [2024-11-05 19:18:17.263922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.060 [2024-11-05 19:18:17.263934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.060 [2024-11-05 19:18:17.264174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.060 [2024-11-05 19:18:17.264397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.060 [2024-11-05 19:18:17.264407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.060 [2024-11-05 19:18:17.264415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.060 [2024-11-05 19:18:17.264423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.060 [2024-11-05 19:18:17.277170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.060 [2024-11-05 19:18:17.277838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.060 [2024-11-05 19:18:17.277877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.060 [2024-11-05 19:18:17.277890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.060 [2024-11-05 19:18:17.278132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.060 [2024-11-05 19:18:17.278355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.060 [2024-11-05 19:18:17.278365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.060 [2024-11-05 19:18:17.278373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.060 [2024-11-05 19:18:17.278381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.060 [2024-11-05 19:18:17.291133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.291801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.291841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.291853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.292095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.292319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.292328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.292335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.292344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.060 [2024-11-05 19:18:17.305090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.305766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.305806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.305817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.306055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.306279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.306289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.306297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.306305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.060 [2024-11-05 19:18:17.319062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.319700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.319740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.319764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.320003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.320227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.320237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.320245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.320253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.060 [2024-11-05 19:18:17.332989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.333516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.333536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.333544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.333770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.333991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.334000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.334008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.334015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.060 [2024-11-05 19:18:17.346952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.347501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.347519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.347527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.347776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.347999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.348008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.348016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.348023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.060 [2024-11-05 19:18:17.360753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.361414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.361453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.361464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.361703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.361941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.361952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.361960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.361968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.060 [2024-11-05 19:18:17.374702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.060 [2024-11-05 19:18:17.375281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.060 [2024-11-05 19:18:17.375302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.060 [2024-11-05 19:18:17.375310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.060 [2024-11-05 19:18:17.375530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.060 [2024-11-05 19:18:17.375757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.060 [2024-11-05 19:18:17.375767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.060 [2024-11-05 19:18:17.375775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.060 [2024-11-05 19:18:17.375782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.323 [2024-11-05 19:18:17.388519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.323 [2024-11-05 19:18:17.389048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.323 [2024-11-05 19:18:17.389067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.323 [2024-11-05 19:18:17.389075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.323 [2024-11-05 19:18:17.389293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.323 [2024-11-05 19:18:17.389513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.323 [2024-11-05 19:18:17.389523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.323 [2024-11-05 19:18:17.389530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.323 [2024-11-05 19:18:17.389537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.323 [2024-11-05 19:18:17.402481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.323 [2024-11-05 19:18:17.403093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.323 [2024-11-05 19:18:17.403132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.323 [2024-11-05 19:18:17.403143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.323 [2024-11-05 19:18:17.403381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.323 [2024-11-05 19:18:17.403605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.323 [2024-11-05 19:18:17.403615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.323 [2024-11-05 19:18:17.403628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.323 [2024-11-05 19:18:17.403636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.323 [2024-11-05 19:18:17.416382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.323 [2024-11-05 19:18:17.417034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.323 [2024-11-05 19:18:17.417073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.323 [2024-11-05 19:18:17.417085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.323 [2024-11-05 19:18:17.417323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.323 [2024-11-05 19:18:17.417547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.323 [2024-11-05 19:18:17.417556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.323 [2024-11-05 19:18:17.417564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.323 [2024-11-05 19:18:17.417572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.323 [2024-11-05 19:18:17.430333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.323 [2024-11-05 19:18:17.431016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.323 [2024-11-05 19:18:17.431055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.323 [2024-11-05 19:18:17.431066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.323 [2024-11-05 19:18:17.431304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.323 [2024-11-05 19:18:17.431528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.323 [2024-11-05 19:18:17.431539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.323 [2024-11-05 19:18:17.431547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.323 [2024-11-05 19:18:17.431556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.323 [2024-11-05 19:18:17.444302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.323 [2024-11-05 19:18:17.444878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.323 [2024-11-05 19:18:17.444919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.323 [2024-11-05 19:18:17.444932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.323 [2024-11-05 19:18:17.445173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.323 [2024-11-05 19:18:17.445397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.323 [2024-11-05 19:18:17.445407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.323 [2024-11-05 19:18:17.445415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.323 [2024-11-05 19:18:17.445423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.323 [2024-11-05 19:18:17.458184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.323 [2024-11-05 19:18:17.458878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.324 [2024-11-05 19:18:17.458917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.324 [2024-11-05 19:18:17.458930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.324 [2024-11-05 19:18:17.459169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.324 [2024-11-05 19:18:17.459393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.324 [2024-11-05 19:18:17.459404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.324 [2024-11-05 19:18:17.459412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.324 [2024-11-05 19:18:17.459420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 509261 Killed "${NVMF_APP[@]}" "$@"
00:28:48.324 [2024-11-05 19:18:17.472172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:48.324 [2024-11-05 19:18:17.472723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.324 [2024-11-05 19:18:17.472768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.324 [2024-11-05 19:18:17.472782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.324 [2024-11-05 19:18:17.473022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:48.324 [2024-11-05 19:18:17.473247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.324 [2024-11-05 19:18:17.473256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.324 [2024-11-05 19:18:17.473265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.324 [2024-11-05 19:18:17.473273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
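The "Killed" message and the tgt_init / nvmfappstart trace entries above are the pivot point of this phase of the test: bdevperf.sh (line 35) has killed the running nvmf target and is bringing up a fresh one, and every "connect() failed, errno = 111" (ECONNREFUSED) entry around it is the host's reconnect loop probing 10.0.0.2:4420 while nothing is listening there. A minimal bash sketch of that kill-and-restart step; the nc probe and the variable handling are illustrative assumptions, not the script's actual contents:

    # Hypothetical sketch: kill the target, relaunch it, wait for the listener to return.
    kill -9 "$nvmfpid" || true                      # host-side connect() now fails with errno 111 (ECONNREFUSED)
    "${NVMF_APP[@]}" -m 0xE &                       # start a fresh nvmf_tgt on the same core mask
    nvmfpid=$!
    until nc -z 10.0.0.2 4420; do sleep 0.1; done   # reconnect attempts succeed once port 4420 listens again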
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=511426
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 511426
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 511426 ']'
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:48.324 19:18:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:48.324 [2024-11-05 19:18:17.486023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.324 [2024-11-05 19:18:17.486570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.324 [2024-11-05 19:18:17.486589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.324 [2024-11-05 19:18:17.486598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.324 [2024-11-05 19:18:17.486823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.324 [2024-11-05 19:18:17.487043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.324 [2024-11-05 19:18:17.487053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.324 [2024-11-05 19:18:17.487060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.324 [2024-11-05 19:18:17.487068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
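The waitforlisten 511426 call traced above is what gates the test until the new target answers on its RPC socket (/var/tmp/spdk.sock, up to max_retries=100 attempts). A simplified equivalent of that wait, assuming SPDK's stock scripts/rpc.py helper and its rpc_get_methods call rather than the helper's exact internals:

    # Rough equivalent of waitforlisten: poll the RPC socket until the target responds.
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break    # nvmf_tgt is up and listening on the UNIX domain socket
        fi
        sleep 0.1
    done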
00:28:48.324 [2024-11-05 19:18:17.499817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.324 [2024-11-05 19:18:17.500442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.324 [2024-11-05 19:18:17.500481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.324 [2024-11-05 19:18:17.500492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.324 [2024-11-05 19:18:17.500731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.324 [2024-11-05 19:18:17.500963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.324 [2024-11-05 19:18:17.500974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.324 [2024-11-05 19:18:17.500982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.324 [2024-11-05 19:18:17.500990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.324 [2024-11-05 19:18:17.513732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.324 [2024-11-05 19:18:17.514422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.324 [2024-11-05 19:18:17.514462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.324 [2024-11-05 19:18:17.514473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.324 [2024-11-05 19:18:17.514712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.324 [2024-11-05 19:18:17.514944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.324 [2024-11-05 19:18:17.514955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.324 [2024-11-05 19:18:17.514962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.324 [2024-11-05 19:18:17.514971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.324 [2024-11-05 19:18:17.527725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.324 [2024-11-05 19:18:17.528265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.324 [2024-11-05 19:18:17.528290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.324 [2024-11-05 19:18:17.528299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.324 [2024-11-05 19:18:17.528518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.324 [2024-11-05 19:18:17.528738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.324 [2024-11-05 19:18:17.528755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.324 [2024-11-05 19:18:17.528763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.324 [2024-11-05 19:18:17.528770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.324 [2024-11-05 19:18:17.534639] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:28:48.324 [2024-11-05 19:18:17.534693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:48.324 [2024-11-05 19:18:17.541711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.324 [2024-11-05 19:18:17.542234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.324 [2024-11-05 19:18:17.542253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.324 [2024-11-05 19:18:17.542261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.324 [2024-11-05 19:18:17.542480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.324 [2024-11-05 19:18:17.542700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.324 [2024-11-05 19:18:17.542710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.324 [2024-11-05 19:18:17.542717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.324 [2024-11-05 19:18:17.542725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
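The DPDK EAL parameters entry above shows the core mask handed down from nvmfappstart (-c 0xE): 0xE is binary 1110, i.e. cores 1, 2 and 3, which lines up with the "Total cores available: 3" notice and the three reactor threads reported further down. An illustrative way to expand such a mask in bash:

    # Expand a hex core mask into the cores it selects (0xE -> cores 1, 2, 3).
    mask=0xE
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core"
    done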
00:28:48.324 [2024-11-05 19:18:17.555671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.324 [2024-11-05 19:18:17.556367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.324 [2024-11-05 19:18:17.556407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.324 [2024-11-05 19:18:17.556418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.324 [2024-11-05 19:18:17.556657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.324 [2024-11-05 19:18:17.556889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.556900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.556908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.556917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.325 [2024-11-05 19:18:17.569533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.325 [2024-11-05 19:18:17.570173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.325 [2024-11-05 19:18:17.570216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.325 [2024-11-05 19:18:17.570228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.325 [2024-11-05 19:18:17.570471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.325 [2024-11-05 19:18:17.570695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.570705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.570713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.570721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.325 [2024-11-05 19:18:17.583475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.325 [2024-11-05 19:18:17.584013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.325 [2024-11-05 19:18:17.584052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.325 [2024-11-05 19:18:17.584064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.325 [2024-11-05 19:18:17.584303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.325 [2024-11-05 19:18:17.584526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.584536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.584544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.584553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.325 [2024-11-05 19:18:17.597314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.325 [2024-11-05 19:18:17.597890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.325 [2024-11-05 19:18:17.597929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.325 [2024-11-05 19:18:17.597942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.325 [2024-11-05 19:18:17.598181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.325 [2024-11-05 19:18:17.598405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.598414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.598423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.598432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.325 [2024-11-05 19:18:17.611177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.325 [2024-11-05 19:18:17.611876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.325 [2024-11-05 19:18:17.611916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.325 [2024-11-05 19:18:17.611927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.325 [2024-11-05 19:18:17.612170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.325 [2024-11-05 19:18:17.612394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.612404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.612412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.612420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.325 [2024-11-05 19:18:17.624972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.325 [2024-11-05 19:18:17.625654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.325 [2024-11-05 19:18:17.625693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.325 [2024-11-05 19:18:17.625706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.325 [2024-11-05 19:18:17.625957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.325 [2024-11-05 19:18:17.626182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.325 [2024-11-05 19:18:17.626192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.325 [2024-11-05 19:18:17.626199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.325 [2024-11-05 19:18:17.626208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.325 [2024-11-05 19:18:17.627047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:48.325 [2024-11-05 19:18:17.638960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.325 [2024-11-05 19:18:17.639663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.325 [2024-11-05 19:18:17.639705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.325 [2024-11-05 19:18:17.639718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.325 [2024-11-05 19:18:17.639966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.325 [2024-11-05 19:18:17.640191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.325 [2024-11-05 19:18:17.640201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.325 [2024-11-05 19:18:17.640209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.325 [2024-11-05 19:18:17.640217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.588 [2024-11-05 19:18:17.652962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.588 [2024-11-05 19:18:17.653573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.588 [2024-11-05 19:18:17.653594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.588 [2024-11-05 19:18:17.653603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.588 [2024-11-05 19:18:17.653828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.588 [2024-11-05 19:18:17.654050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.588 [2024-11-05 19:18:17.654065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.588 [2024-11-05 19:18:17.654074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.588 [2024-11-05 19:18:17.654081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.588 [2024-11-05 19:18:17.656444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:48.588 [2024-11-05 19:18:17.656469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:48.588 [2024-11-05 19:18:17.656475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:48.588 [2024-11-05 19:18:17.656480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:48.588 [2024-11-05 19:18:17.656484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
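The app_setup_trace notices above spell out how to inspect this run's tracepoints (group mask 0xFFFF, instance id 0, shared-memory file /dev/shm/nvmf_trace.0). Following those messages, a capture would look roughly like this; the output paths are assumptions for illustration:

    # Snapshot the nvmf target's tracepoints, per the notices above.
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or preserve the raw shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0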
00:28:48.588 [2024-11-05 19:18:17.657536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:48.588 [2024-11-05 19:18:17.657690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.588 [2024-11-05 19:18:17.657692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:48.588 [2024-11-05 19:18:17.666819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.588 [2024-11-05 19:18:17.667416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.588 [2024-11-05 19:18:17.667434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.588 [2024-11-05 19:18:17.667443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.588 [2024-11-05 19:18:17.667662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.588 [2024-11-05 19:18:17.667889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.588 [2024-11-05 19:18:17.667900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.588 [2024-11-05 19:18:17.667908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.588 [2024-11-05 19:18:17.667915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.588 [2024-11-05 19:18:17.680652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.588 [2024-11-05 19:18:17.681359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.588 [2024-11-05 19:18:17.681404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.588 [2024-11-05 19:18:17.681417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.588 [2024-11-05 19:18:17.681663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.588 [2024-11-05 19:18:17.681895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.588 [2024-11-05 19:18:17.681905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.588 [2024-11-05 19:18:17.681914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.588 [2024-11-05 19:18:17.681923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.588 [2024-11-05 19:18:17.694476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.588 [2024-11-05 19:18:17.695057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.588 [2024-11-05 19:18:17.695077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.695086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.695306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.695526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.695535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.695543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.695550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.589 [2024-11-05 19:18:17.708287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.708842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.708884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.708896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.709142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.709366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.709376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.709384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.709393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.589 [2024-11-05 19:18:17.722158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.722755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.722776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.722784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.723004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.723224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.723232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.723240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.723247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.589 [2024-11-05 19:18:17.735984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.736558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.736575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.736583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.736812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.737033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.737041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.737049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.737058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.589 [2024-11-05 19:18:17.749796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.750206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.750224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.750232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.750450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.750670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.750679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.750686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.750693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.589 [2024-11-05 19:18:17.763633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.764322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.764362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.764374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.764615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.764847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.764858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.764866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.764874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.589 [2024-11-05 19:18:17.777441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.589 [2024-11-05 19:18:17.778119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.589 [2024-11-05 19:18:17.778159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.589 [2024-11-05 19:18:17.778171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.589 [2024-11-05 19:18:17.778411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.589 [2024-11-05 19:18:17.778634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.589 [2024-11-05 19:18:17.778649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.589 [2024-11-05 19:18:17.778657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.589 [2024-11-05 19:18:17.778665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.589 4634.33 IOPS, 18.10 MiB/s [2024-11-05T18:18:17.912Z] [2024-11-05 19:18:17.791417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.589 [2024-11-05 19:18:17.792091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.589 [2024-11-05 19:18:17.792130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420
00:28:48.589 [2024-11-05 19:18:17.792141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set
00:28:48.589 [2024-11-05 19:18:17.792379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor
00:28:48.589 [2024-11-05 19:18:17.792603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.589 [2024-11-05 19:18:17.792613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.589 [2024-11-05 19:18:17.792622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.589 [2024-11-05 19:18:17.792630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
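The bdevperf stats line interleaved above (4634.33 IOPS, 18.10 MiB/s) is self-consistent if the run is doing 4 KiB I/O, which is an assumption here since the I/O size is configured elsewhere in the script: 4634.33 IOPS * 4096 bytes = 18.10 MiB/s. A one-line check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 4634.33 * 4096 / (1024 * 1024) }'
  # prints 18.10, matching the figure reported by bdevperf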
00:28:48.589 [2024-11-05 19:18:17.805382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.806048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.806088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.806099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.806338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.806562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.806571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.806579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.806588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.589 [2024-11-05 19:18:17.819344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.819940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.819961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.589 [2024-11-05 19:18:17.819969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.589 [2024-11-05 19:18:17.820190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.589 [2024-11-05 19:18:17.820409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.589 [2024-11-05 19:18:17.820418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.589 [2024-11-05 19:18:17.820425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.589 [2024-11-05 19:18:17.820440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.589 [2024-11-05 19:18:17.833187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.589 [2024-11-05 19:18:17.833631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.589 [2024-11-05 19:18:17.833648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.833656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.833882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.834102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.834112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.834119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.834126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.590 [2024-11-05 19:18:17.847065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.590 [2024-11-05 19:18:17.847734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.590 [2024-11-05 19:18:17.847781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.847794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.848034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.848258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.848268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.848276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.848285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.590 [2024-11-05 19:18:17.861029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.590 [2024-11-05 19:18:17.861615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.590 [2024-11-05 19:18:17.861635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.861643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.861869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.862090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.862100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.862108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.862115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.590 [2024-11-05 19:18:17.874846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.590 [2024-11-05 19:18:17.875479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.590 [2024-11-05 19:18:17.875518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.875530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.875777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.876001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.876011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.876019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.876027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.590 [2024-11-05 19:18:17.888775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.590 [2024-11-05 19:18:17.889414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.590 [2024-11-05 19:18:17.889454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.889465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.889704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.889948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.889960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.889968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.889977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.590 [2024-11-05 19:18:17.902715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.590 [2024-11-05 19:18:17.903359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.590 [2024-11-05 19:18:17.903398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.590 [2024-11-05 19:18:17.903409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.590 [2024-11-05 19:18:17.903648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.590 [2024-11-05 19:18:17.903881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.590 [2024-11-05 19:18:17.903892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.590 [2024-11-05 19:18:17.903900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.590 [2024-11-05 19:18:17.903908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.853 [2024-11-05 19:18:17.916648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.917239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.917260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.917268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.917492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.917713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.917722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.917730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.917737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.853 [2024-11-05 19:18:17.930482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.931028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.931047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.931055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.931274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.931493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.931503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.931511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.931518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.853 [2024-11-05 19:18:17.944463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.945116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.945156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.945167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.945406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.945630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.945640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.945648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.945657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.853 [2024-11-05 19:18:17.958406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.959108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.959147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.959158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.959397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.959621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.959636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.959644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.959652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.853 [2024-11-05 19:18:17.972223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.972840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.972880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.972893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.973133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.973357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.973367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.973375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.973383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.853 [2024-11-05 19:18:17.986128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:17.986677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:17.986697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:17.986705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:17.986930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:17.987151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:17.987160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:17.987167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:17.987174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.853 [2024-11-05 19:18:18.000132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:18.000718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:18.000736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:18.000744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:18.000970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:18.001190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:18.001199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:18.001207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:18.001219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.853 [2024-11-05 19:18:18.013958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:18.014614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:18.014653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:18.014665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:18.014911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:18.015135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:18.015144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.853 [2024-11-05 19:18:18.015152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.853 [2024-11-05 19:18:18.015161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.853 [2024-11-05 19:18:18.027919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.853 [2024-11-05 19:18:18.028579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.853 [2024-11-05 19:18:18.028619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.853 [2024-11-05 19:18:18.028631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.853 [2024-11-05 19:18:18.028880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.853 [2024-11-05 19:18:18.029105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.853 [2024-11-05 19:18:18.029114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.029122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.029130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.854 [2024-11-05 19:18:18.041875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.042519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.042559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.042570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.042815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.043050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.043060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.043068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.043076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.854 [2024-11-05 19:18:18.055820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.056369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.056389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.056397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.056616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.056841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.056852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.056860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.056867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.854 [2024-11-05 19:18:18.069814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.070491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.070531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.070542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.070791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.071015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.071025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.071033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.071042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.854 [2024-11-05 19:18:18.083784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.084437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.084477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.084488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.084726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.084960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.084971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.084979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.084987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.854 [2024-11-05 19:18:18.097735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.098203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.098224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.098232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.098457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.098677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.098687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.098694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.098701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.854 [2024-11-05 19:18:18.111521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.112065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.112084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.112092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.112311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.112530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.112540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.112547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.112554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.854 [2024-11-05 19:18:18.125509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.126045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.126062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.126070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.126289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.126508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.126517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.126524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.126531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.854 [2024-11-05 19:18:18.139478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.140111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.140151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.140162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.140401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.140624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.140639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.140647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.140656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:48.854 [2024-11-05 19:18:18.153403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.153893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.153933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.153944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.154182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.154406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.854 [2024-11-05 19:18:18.154416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.854 [2024-11-05 19:18:18.154424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.854 [2024-11-05 19:18:18.154433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.854 [2024-11-05 19:18:18.167396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.854 [2024-11-05 19:18:18.168148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.854 [2024-11-05 19:18:18.168188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:48.854 [2024-11-05 19:18:18.168199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:48.854 [2024-11-05 19:18:18.168437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:48.854 [2024-11-05 19:18:18.168661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.855 [2024-11-05 19:18:18.168671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.855 [2024-11-05 19:18:18.168679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.855 [2024-11-05 19:18:18.168688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.117 [2024-11-05 19:18:18.181256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.181844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.181865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.181874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.182094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.182314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.182322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.182331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.182343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.117 [2024-11-05 19:18:18.195098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.195728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.195775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.195788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.196028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.196251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.196261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.196269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.196277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.117 [2024-11-05 19:18:18.209028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.209711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.209759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.209771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.210009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.210233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.210245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.210253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.210262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.117 [2024-11-05 19:18:18.223021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.223594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.223634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.223647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.223895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.224120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.224129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.224137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.224146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.117 [2024-11-05 19:18:18.236889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.237454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.237474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.237482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.237702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.237929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.237939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.237947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.237954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.117 [2024-11-05 19:18:18.250688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.251257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.251275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.251283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.251501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.117 [2024-11-05 19:18:18.251721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.117 [2024-11-05 19:18:18.251732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.117 [2024-11-05 19:18:18.251740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.117 [2024-11-05 19:18:18.251751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.117 [2024-11-05 19:18:18.264484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.117 [2024-11-05 19:18:18.265129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.117 [2024-11-05 19:18:18.265168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.117 [2024-11-05 19:18:18.265179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.117 [2024-11-05 19:18:18.265417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.265641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.265651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.265659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.265668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.118 [2024-11-05 19:18:18.278422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.278954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.278994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.279007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.279252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.279476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.279486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.279494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.279502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.118 [2024-11-05 19:18:18.292266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.292857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.292896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.292909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.293151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.293375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.293384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.293392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.293401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.118 [2024-11-05 19:18:18.306148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.306806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.306846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.306859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.307099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.307323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.307334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.307342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.307350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.118 [2024-11-05 19:18:18.320112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.320753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.320792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.320803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.321041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.321265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.321281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.321289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.321298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.118 [2024-11-05 19:18:18.334042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.334726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.334774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.334786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.335025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.335249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.335259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.335267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.335275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.118 [2024-11-05 19:18:18.348023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.348567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.348587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.348595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.348820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.349041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.349051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.349058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.349065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.118 [2024-11-05 19:18:18.362007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.362544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.362562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.362570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.362793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.363018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.363028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.363036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.363043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.118 [2024-11-05 19:18:18.375988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.376560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.376577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.376585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.118 [2024-11-05 19:18:18.376807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.118 [2024-11-05 19:18:18.377027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.118 [2024-11-05 19:18:18.377036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.118 [2024-11-05 19:18:18.377044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.118 [2024-11-05 19:18:18.377050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
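The trap registered above chains its two cleanup steps with `|| :` so that a failing process_shm cannot short-circuit the handler before nvmftestfini runs; `:` is the shell no-op builtin and always succeeds. The same pattern in isolation (a sketch with placeholder function names):

    # dump_state may fail; cleanup must still run on any exit path
    trap 'dump_state || :; cleanup' SIGINT SIGTERM EXIT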
00:28:49.118 [2024-11-05 19:18:18.377334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.118 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.118 [2024-11-05 19:18:18.389811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.118 [2024-11-05 19:18:18.390345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.118 [2024-11-05 19:18:18.390363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.118 [2024-11-05 19:18:18.390370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.119 [2024-11-05 19:18:18.390590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.119 [2024-11-05 19:18:18.390814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.119 [2024-11-05 19:18:18.390824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.119 [2024-11-05 19:18:18.390831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.119 [2024-11-05 19:18:18.390838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.119 [2024-11-05 19:18:18.403791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.119 [2024-11-05 19:18:18.404350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.119 [2024-11-05 19:18:18.404367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.119 [2024-11-05 19:18:18.404375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.119 [2024-11-05 19:18:18.404593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.119 [2024-11-05 19:18:18.404817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.119 [2024-11-05 19:18:18.404827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.119 [2024-11-05 19:18:18.404835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.119 [2024-11-05 19:18:18.404841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
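The bdev_malloc_create 64 512 -b Malloc0 call above backs the target with a RAM disk: 64 is the total size (the RPC takes it in MiB) and 512 the block size in bytes, which works out to 64 * 1024 * 1024 / 512 = 131072 logical blocks. As a quick shell sanity check:

    # Block count of the Malloc0 bdev created above
    echo $(( 64 * 1024 * 1024 / 512 ))    # -> 131072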
00:28:49.119 [2024-11-05 19:18:18.417784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.119 Malloc0 00:28:49.119 [2024-11-05 19:18:18.418196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.119 [2024-11-05 19:18:18.418212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.119 [2024-11-05 19:18:18.418220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.119 [2024-11-05 19:18:18.418439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.119 [2024-11-05 19:18:18.418658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.119 [2024-11-05 19:18:18.418668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.119 [2024-11-05 19:18:18.418676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.119 [2024-11-05 19:18:18.418684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.119 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.119 [2024-11-05 19:18:18.431619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.119 [2024-11-05 19:18:18.432032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.119 [2024-11-05 19:18:18.432049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.119 [2024-11-05 19:18:18.432057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.119 [2024-11-05 19:18:18.432276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.119 [2024-11-05 19:18:18.432499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.119 [2024-11-05 19:18:18.432508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.119 [2024-11-05 19:18:18.432515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:49.119 [2024-11-05 19:18:18.432522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.379 [2024-11-05 19:18:18.445464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.379 [2024-11-05 19:18:18.445995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.379 [2024-11-05 19:18:18.446035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187d000 with addr=10.0.0.2, port=4420 00:28:49.379 [2024-11-05 19:18:18.446046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d000 is same with the state(6) to be set 00:28:49.379 [2024-11-05 19:18:18.446286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187d000 (9): Bad file descriptor 00:28:49.379 [2024-11-05 19:18:18.446510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.379 [2024-11-05 19:18:18.446519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.379 [2024-11-05 19:18:18.446527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.379 [2024-11-05 19:18:18.446536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.379 [2024-11-05 19:18:18.449542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.379 19:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 510399 00:28:49.379 [2024-11-05 19:18:18.459272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.379 [2024-11-05 19:18:18.488777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
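Pieced together, the rpc_cmd traces interleaved with the reconnect noise form a five-call target bring-up, and the "Target Listening on 10.0.0.2 port 4420" notice is the point at which the host's reconnect poller can finally succeed, as the last line above confirms. Issued standalone, the same sequence would look like this (a sketch; scripts/rpc.py from an SPDK checkout is assumed as the client, the test's rpc_cmd helper issues the same calls):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420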
00:28:50.582 4518.14 IOPS, 17.65 MiB/s [2024-11-05T18:18:20.846Z] 5348.12 IOPS, 20.89 MiB/s [2024-11-05T18:18:22.229Z] 5982.44 IOPS, 23.37 MiB/s [2024-11-05T18:18:22.800Z] 6492.40 IOPS, 25.36 MiB/s [2024-11-05T18:18:24.183Z] 6918.18 IOPS, 27.02 MiB/s [2024-11-05T18:18:25.124Z] 7319.58 IOPS, 28.59 MiB/s [2024-11-05T18:18:26.064Z] 7608.92 IOPS, 29.72 MiB/s [2024-11-05T18:18:27.004Z] 7868.86 IOPS, 30.74 MiB/s 00:28:57.682 Latency(us) 00:28:57.682 [2024-11-05T18:18:27.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:57.682 Verification LBA range: start 0x0 length 0x4000 00:28:57.682 Nvme1n1 : 15.00 8082.00 31.57 9833.26 0.00 7118.78 791.89 15510.19 00:28:57.682 [2024-11-05T18:18:27.005Z] =================================================================================================================== 00:28:57.682 [2024-11-05T18:18:27.005Z] Total : 8082.00 31.57 9833.26 0.00 7118.78 791.89 15510.19 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:57.682 19:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:57.682 rmmod nvme_tcp 00:28:57.682 rmmod nvme_fabrics 00:28:57.682 rmmod nvme_keyring 00:28:57.682 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 511426 ']' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 511426 ']' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 511426' 00:28:57.943 killing process with pid 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 511426 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@264 -- # local dev 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:57.943 19:18:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # return 0 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@41 -- # dev_map=() 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@284 -- # iptr 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-save 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-restore 00:29:00.487 00:29:00.487 real 0m28.207s 00:29:00.487 user 1m3.127s 00:29:00.487 sys 0m7.534s 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.487 ************************************ 00:29:00.487 END TEST nvmf_bdevperf 00:29:00.487 ************************************ 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.487 ************************************ 00:29:00.487 START TEST nvmf_target_disconnect 00:29:00.487 ************************************ 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.487 * Looking for test storage... 00:29:00.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:00.487 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- scripts/common.sh@344 -- # case "$op" in 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:00.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.488 --rc genhtml_branch_coverage=1 00:29:00.488 --rc genhtml_function_coverage=1 00:29:00.488 --rc genhtml_legend=1 00:29:00.488 --rc geninfo_all_blocks=1 00:29:00.488 --rc geninfo_unexecuted_blocks=1 00:29:00.488 00:29:00.488 ' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:00.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.488 --rc genhtml_branch_coverage=1 00:29:00.488 --rc genhtml_function_coverage=1 00:29:00.488 --rc genhtml_legend=1 00:29:00.488 --rc geninfo_all_blocks=1 00:29:00.488 --rc geninfo_unexecuted_blocks=1 00:29:00.488 00:29:00.488 ' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:00.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.488 --rc genhtml_branch_coverage=1 00:29:00.488 --rc genhtml_function_coverage=1 00:29:00.488 --rc genhtml_legend=1 00:29:00.488 --rc geninfo_all_blocks=1 00:29:00.488 --rc geninfo_unexecuted_blocks=1 00:29:00.488 00:29:00.488 ' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:00.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.488 --rc genhtml_branch_coverage=1 00:29:00.488 --rc genhtml_function_coverage=1 00:29:00.488 --rc genhtml_legend=1 
00:29:00.488 --rc geninfo_all_blocks=1 00:29:00.488 --rc geninfo_unexecuted_blocks=1 00:29:00.488 00:29:00.488 ' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.488 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 -- # : 0 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:00.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:00.489 19:18:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:29:00.489 19:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:29:08.631 19:18:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:29:08.631 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:08.632 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:08.632 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:08.632 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:08.632 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:08.632 19:18:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:08.632 10.0.0.1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 
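In the set_ip traces above and below, val_to_ip expands a 32-bit pool value into dotted-quad form: 167772161 is 0x0A000001, printed as 10.0.0.1 for the initiator side, and 167772162 (0x0A000002) becomes 10.0.0.2 for the target side. A standalone sketch of that conversion (the helper's body is not shown in the trace, so the shift-and-mask form here is an assumption):

    val=167772161    # 0x0A000001, first address handed out from the 0x0a000001 pool
    printf '%u.%u.%u.%u\n' \
        $(( val >> 24 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8) & 255 )) $(( val & 255 ))    # -> 10.0.0.1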
00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:08.632 10.0.0.2 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:08.632 19:18:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:08.632 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:08.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.690 ms 00:29:08.633 00:29:08.633 --- 10.0.0.1 ping statistics --- 00:29:08.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.633 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:08.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:08.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:29:08.633 00:29:08.633 --- 10.0.0.2 ping statistics --- 00:29:08.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.633 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 
in_ns= ip 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:08.633 19:18:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:08.633 ************************************ 00:29:08.633 START TEST nvmf_target_disconnect_tc1 00:29:08.633 ************************************ 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.633 [2024-11-05 19:18:37.175375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.633 [2024-11-05 19:18:37.175463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5aad0 with addr=10.0.0.2, port=4420 00:29:08.633 [2024-11-05 19:18:37.175502] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:08.633 [2024-11-05 19:18:37.175514] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:08.633 [2024-11-05 19:18:37.175522] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:08.633 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:08.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:08.633 Initializing NVMe Controllers 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.633 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.633 00:29:08.633 real 0m0.128s 00:29:08.634 user 0m0.064s 00:29:08.634 sys 0m0.063s 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:08.634 ************************************ 00:29:08.634 END TEST 
nvmf_target_disconnect_tc1 00:29:08.634 ************************************ 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:08.634 ************************************ 00:29:08.634 START TEST nvmf_target_disconnect_tc2 00:29:08.634 ************************************ 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=517570 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 517570 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 517570 ']' 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.634 19:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.634 [2024-11-05 19:18:37.340371] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
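The trace above shows nvmfappstart launching nvmf_tgt inside the nvmf_ns_spdk namespace (where cvl_0_1 already carries 10.0.0.2, per the 19:18:36 interface setup) and then parking in waitforlisten until the app's RPC socket answers. A minimal stand-alone sketch of that start-and-wait pattern follows; the poll loop only approximates the harness's waitforlisten helper, and rpc.py plus the default /var/tmp/spdk.sock socket are assumptions rather than lines from this log (paths shortened relative to the SPDK tree):

  # Launch the target in the prepared namespace; -m 0xF0 pins it to cores 4-7.
  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # Wait for the RPC socket to come up (approximation of waitforlisten).
  for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
  done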
00:29:08.634 [2024-11-05 19:18:37.340432] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.634 [2024-11-05 19:18:37.440115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.634 [2024-11-05 19:18:37.493073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.634 [2024-11-05 19:18:37.493127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.634 [2024-11-05 19:18:37.493137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.634 [2024-11-05 19:18:37.493144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.634 [2024-11-05 19:18:37.493150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.634 [2024-11-05 19:18:37.495190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:08.634 [2024-11-05 19:18:37.495351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:08.634 [2024-11-05 19:18:37.495517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.634 [2024-11-05 19:18:37.495518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.896 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 Malloc0 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 [2024-11-05 19:18:38.253273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
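With the TCP transport initialized ("*** TCP Transport Init ***"), the test provisions the subsystem over RPC; the rpc_cmd calls are visible in the trace below. Written as direct rpc.py invocations, the same sequence looks roughly like this (rpc.py and its default socket are assumptions; the arguments themselves are verbatim from this run):

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_transport -t tcp -o         # -o comes from NVMF_TRANSPORT_OPTS for tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420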
00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 [2024-11-05 19:18:38.281643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=517847 00:29:09.156 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:09.157 19:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.072 19:18:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 517570 00:29:11.072 19:18:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Write completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 Read completed with error (sct=0, sc=8) 00:29:11.072 starting I/O failed 00:29:11.072 [2024-11-05 19:18:40.309237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:11.072 [2024-11-05 19:18:40.309616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.309638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 
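The burst above is the disconnect itself: tc2 starts the reconnect example against 10.0.0.2:4420 with 32 outstanding I/Os, waits two seconds, then SIGKILLs the target (pid 517570), so every queued I/O completes with sc=8 and the admin queue reports CQ transport error -6 before the ECONNREFUSED retry loop begins. Stripped of the harness plumbing, the choreography reduces to roughly the following sketch (paths and PIDs taken from this run):

  # Drive 4 KiB 50/50 randrw traffic, 32 deep, for 10 s on cores 0-3.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!

  sleep 2            # let I/O reach steady state
  kill -9 $nvmfpid   # hard-kill the target mid-I/O (517570 here)
  sleep 2            # in-flight I/O fails; reconnect attempts hit errno 111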
00:29:11.072 [2024-11-05 19:18:40.310058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.310095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.310425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.310441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.310959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.310999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.311367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.311382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.311713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.311725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.311922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.311934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.312272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.312284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.072 [2024-11-05 19:18:40.312466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.072 [2024-11-05 19:18:40.312478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.072 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.312691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.312703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.312919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.312934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 
00:29:11.073 [2024-11-05 19:18:40.313095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.313106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.313413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.313425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.313748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.313761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.314080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.314092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.314308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.314320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.314628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.314640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.315023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.315035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.315362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.315374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.315467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.315477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 00:29:11.073 [2024-11-05 19:18:40.315770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.073 [2024-11-05 19:18:40.315783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.073 qpair failed and we were unable to recover it. 
00:29:11.075 [2024-11-05 19:18:40.338368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.338378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.338649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.338659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.338859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.338871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.339169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.339180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.339491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.339502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.339820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.339832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.340131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.340143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.340453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.340465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.340763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.340775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.341106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.341117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 
00:29:11.075 [2024-11-05 19:18:40.341391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.341402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.341704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.341716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.341943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.341955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.342153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.342163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.342508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.342519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.342712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.342724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.342932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.342943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.343214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.343225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.343501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.343512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.343712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.343724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 
00:29:11.075 [2024-11-05 19:18:40.344033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.344045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.344235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.344246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.344519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.344530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.344832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.344844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.345124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.345135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.345323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.345336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.345625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.345636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.345958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.345972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.346270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.346281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.346537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.346548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 
00:29:11.075 [2024-11-05 19:18:40.346833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.346845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.347021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.347032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.347200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.347212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.347501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.075 [2024-11-05 19:18:40.347512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.075 qpair failed and we were unable to recover it. 00:29:11.075 [2024-11-05 19:18:40.347805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.347816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.348112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.348123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.348406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.348417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.348599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.348610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.348877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.348888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.349205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.349215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 
00:29:11.076 [2024-11-05 19:18:40.349458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.349469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.349754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.349765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.350039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.350051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.350367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.350378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.350679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.350691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.350948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.350960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.351255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.351267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.351595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.351605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.351918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.351931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.352239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.352250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 
00:29:11.076 [2024-11-05 19:18:40.352560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.352572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.352899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.352911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.353198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.353208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.353516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.353527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.353914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.353927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.354224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.354235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.354559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.354570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.354797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.354807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.355017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.355028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.355292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.355303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 
00:29:11.076 [2024-11-05 19:18:40.355594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.355606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.355912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.355924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.356128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.356139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.356430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.356441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.356786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.356798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.356988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.356999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.357299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.357310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.357613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.357624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.357929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.076 [2024-11-05 19:18:40.357941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.076 qpair failed and we were unable to recover it. 00:29:11.076 [2024-11-05 19:18:40.358256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.358266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 
00:29:11.077 [2024-11-05 19:18:40.358462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.358474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.358776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.358788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.359194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.359205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.359398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.359408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.359622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.359634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.359935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.359947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.360252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.360264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.360443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.360456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.360740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.360759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.361095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.361107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 
00:29:11.077 [2024-11-05 19:18:40.361406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.361418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.361707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.361720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.362034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.362045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.362406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.362417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.362720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.362731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.363025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.363036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.363388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.363400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.363637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.363648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.363958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.363969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.364236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.364246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 
00:29:11.077 [2024-11-05 19:18:40.364574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.364585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.364888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.364900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.365229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.365240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.365568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.365579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.365963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.365974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.366292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.366304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.366635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.366646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.366921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.366932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.367229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.367240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.367571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.367582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 
00:29:11.077 [2024-11-05 19:18:40.367904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.367916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.368124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.368135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.368465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.368477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.368766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.368779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.369112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.077 [2024-11-05 19:18:40.369123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.077 qpair failed and we were unable to recover it. 00:29:11.077 [2024-11-05 19:18:40.369455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.369466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.369770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.369781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.370053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.370063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.370374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.370384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.370714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.370725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 
00:29:11.078 [2024-11-05 19:18:40.371029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.371041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.371320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.371332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.371664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.371674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.371991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.372003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.372353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.372365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.372666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.372676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.372994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.373005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.373293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.373304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.373596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.373606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.373919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 
00:29:11.078 [2024-11-05 19:18:40.374245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.374256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.374549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.374560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.374866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.374878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.375062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.375073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.375391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.375403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.375701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.375713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.376011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.376024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.376326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.376338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.376661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.376673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.376972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.376984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 
00:29:11.078 [2024-11-05 19:18:40.377309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.377321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.377501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.377514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.377791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.377803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.378117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.378128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.378425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.378435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.378774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.378785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.379070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.379081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.379383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.379394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.379718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.379730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 00:29:11.078 [2024-11-05 19:18:40.380040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.078 [2024-11-05 19:18:40.380051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.078 qpair failed and we were unable to recover it. 
00:29:11.078 [2024-11-05 19:18:40.380364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.078 [2024-11-05 19:18:40.380376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.078 qpair failed and we were unable to recover it.
00:29:11.359 [the same three-line error repeats for every reconnect attempt from 2024-11-05 19:18:40.380 through 19:18:40.444472: each connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x150a0c0, and each time the qpair fails and is not recovered]
00:29:11.359 [2024-11-05 19:18:40.444810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.444821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.445143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.445155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.445516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.445527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.445832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.445845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.446174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.446185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.446395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.446405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.446718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.446729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.447055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.447066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.447399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.447411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.447605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.447618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 
00:29:11.359 [2024-11-05 19:18:40.447925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.447937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.448307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.448318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.448606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.448617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.448915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.448927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.449232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.449244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.449539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.449550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.449852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.449865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.450167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.450178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.450486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.450498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.450804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.450816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 
00:29:11.359 [2024-11-05 19:18:40.451119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.451130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.451434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.451446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.359 qpair failed and we were unable to recover it. 00:29:11.359 [2024-11-05 19:18:40.451749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.359 [2024-11-05 19:18:40.451761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.451943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.451954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.452279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.452291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.452588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.452599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.452866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.452878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.453208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.453218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.453488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.453499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.453801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.453813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 
00:29:11.360 [2024-11-05 19:18:40.454122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.454134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.454454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.454466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.454675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.454687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.454894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.454906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.455121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.455132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.455437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.455449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.455808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.455820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.456030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.456040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.456359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.456370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.456691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.456703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 
00:29:11.360 [2024-11-05 19:18:40.456980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.456992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.457324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.457336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.457637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.457648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.457959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.457974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.458261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.458272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.458581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.458592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.458768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.458779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.459084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.459095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.459399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.459411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.459608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.459618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 
00:29:11.360 [2024-11-05 19:18:40.460012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.460023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.460319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.460330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.460627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.460638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.460916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.460927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.461248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.461260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.461568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.461580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.461762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.461775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.462104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.462117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.462446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.462459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.462792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.462803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 
00:29:11.360 [2024-11-05 19:18:40.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.360 [2024-11-05 19:18:40.463119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.360 qpair failed and we were unable to recover it. 00:29:11.360 [2024-11-05 19:18:40.463455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.463465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.463758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.463769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.464107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.464118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.464326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.464336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.464636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.464648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.464981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.464993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.465296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.465307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.465604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.465615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.465960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.465971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 
00:29:11.361 [2024-11-05 19:18:40.466269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.466281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.466613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.466623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.466921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.466932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.467238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.467249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.467524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.467535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.467840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.467851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.468183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.468195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.468381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.468392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.468719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.468730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.469054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.469066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 
00:29:11.361 [2024-11-05 19:18:40.469360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.469371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.469668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.469679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.469805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.469815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.470134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.470146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.470459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.470471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.470770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.470782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.471204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.471215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.471519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.471531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.471838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.471850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.472063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.472073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 
00:29:11.361 [2024-11-05 19:18:40.472370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.472381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.472714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.472725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.473031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.473042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.473367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.473379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.473657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.473668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.473862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.473873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.474081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.474094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.474415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.474426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.474720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.474731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 00:29:11.361 [2024-11-05 19:18:40.475037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.361 [2024-11-05 19:18:40.475049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.361 qpair failed and we were unable to recover it. 
00:29:11.361 [2024-11-05 19:18:40.475353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.475365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.475742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.475756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.475936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.475948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.476263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.476275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.476523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.476534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.476816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.476827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.477152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.477163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.477459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.477472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.477768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.477781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.478110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.478122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 
00:29:11.362 [2024-11-05 19:18:40.478450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.478461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.478756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.478770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.478989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.479000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.479329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.479340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.479666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.479678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.479984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.479996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.480173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.480184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.480509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.480520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.480847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.480860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.481191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.481202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 
00:29:11.362 [2024-11-05 19:18:40.481468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.481479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.481868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.481879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.482173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.482185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.482488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.482499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.482821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.482833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.483145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.483156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.483431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.483442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.483752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.483764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.484094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.484105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 00:29:11.362 [2024-11-05 19:18:40.484410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.362 [2024-11-05 19:18:40.484421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.362 qpair failed and we were unable to recover it. 
00:29:11.362 [2024-11-05 19:18:40.484756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.362 [2024-11-05 19:18:40.484768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.362 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats verbatim for each successive reconnect attempt, differing only in the advancing event timestamps (2024-11-05 19:18:40.484990 through 19:18:40.549004); every attempt hits errno 111 against tqpair=0x150a0c0 at 10.0.0.2 port 4420 and ends with "qpair failed and we were unable to recover it." ...]
00:29:11.368 [2024-11-05 19:18:40.549315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.549326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.549624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.549636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.549810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.549823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.550140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.550151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.550500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.550512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.550896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.550907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.551212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.551224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.551526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.551538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.551729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.551739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.552045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.552057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 
00:29:11.368 [2024-11-05 19:18:40.552332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.552343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.552633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.552644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.552921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.552931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.553129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.553140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.553468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.553480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.553781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.553795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.553997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.554007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.554311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.554322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.554659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.554670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.554986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.554997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 
00:29:11.368 [2024-11-05 19:18:40.555385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.555396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.368 [2024-11-05 19:18:40.555714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.368 qpair failed and we were unable to recover it. 00:29:11.368 [2024-11-05 19:18:40.555904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.555916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.556249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.556261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.556562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.556573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.556872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.556883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.557182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.557192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.557519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.557530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.557839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.557852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.558162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.558173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 
00:29:11.369 [2024-11-05 19:18:40.558501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.558512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.558842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.558853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.559167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.559179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.559488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.559499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.559794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.559806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.560087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.560098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.560396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.560409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.560636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.560647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.560965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.560977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.561286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.561297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 
00:29:11.369 [2024-11-05 19:18:40.561598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.561608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.561765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.561778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.562108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.562123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.562416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.562428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.562614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.562625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.562919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.562930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.563245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.563256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.563422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.563434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.563677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.563688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.563875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.563887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 
00:29:11.369 [2024-11-05 19:18:40.564203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.564213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.564512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.564523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.564811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.564822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.565098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.565109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.565323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.565333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.565687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.565698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.565982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.565994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.566305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.566316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.566649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.566661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.369 [2024-11-05 19:18:40.566977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.566990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 
00:29:11.369 [2024-11-05 19:18:40.567159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.369 [2024-11-05 19:18:40.567169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.369 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.567350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.567362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.567669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.567680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.567838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.567850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.568185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.568196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.568489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.568500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.568753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.568764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.569094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.569106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.569206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.569216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.569529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.569540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 
00:29:11.370 [2024-11-05 19:18:40.569858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.569869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.570191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.570202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.570504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.570516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.570753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.570765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.570949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.570960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.571284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.571295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.571636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.571647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.571915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.571926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.572255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.572266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.572563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.572575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 
00:29:11.370 [2024-11-05 19:18:40.572887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.572898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.573069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.573079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.573266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.573277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.573606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.573618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.573982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.573993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.574351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.574362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.574534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.574545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.574813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.574824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.575114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.575125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.575206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.575216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 
00:29:11.370 [2024-11-05 19:18:40.575437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.575447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.575761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.575773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.576076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.576087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.576371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.576382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.576565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.576575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.576861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.576873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.577058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.577070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.577378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.577389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.577693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.577704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.370 qpair failed and we were unable to recover it. 00:29:11.370 [2024-11-05 19:18:40.577878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.370 [2024-11-05 19:18:40.577890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 
00:29:11.371 [2024-11-05 19:18:40.578168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.578179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.578361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.578372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.578735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.578750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.579059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.579071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.579388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.579400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.579700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.579711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.580025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.580037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.580348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.580358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.580642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.580653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.580932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.580942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 
00:29:11.371 [2024-11-05 19:18:40.581287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.581300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.581596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.581607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.581998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.582010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.582212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.582222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.582499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.582511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.582826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.582837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.583162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.583173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.583510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.583522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.583823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.583835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.584178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.584190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 
00:29:11.371 [2024-11-05 19:18:40.584406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.584419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.584737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.584752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.585067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.585079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.585379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.585390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.585755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.585767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.586089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.586102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.586404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.586416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.586694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.586706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.587025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.587036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 00:29:11.371 [2024-11-05 19:18:40.587350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.587362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it. 
00:29:11.371 [2024-11-05 19:18:40.587762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.371 [2024-11-05 19:18:40.587774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.371 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats verbatim, with only the timestamps advancing from 19:18:40.587 through 19:18:40.652 (log markers 00:29:11.371-00:29:11.378): each TCP connect() attempt to 10.0.0.2 port 4420 is refused (errno 111, ECONNREFUSED), the sock connection for tqpair=0x150a0c0 fails, and every qpair fails without recovering ...]
00:29:11.378 [2024-11-05 19:18:40.652858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.652869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.653186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.653199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.653504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.653517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.653819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.653831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.654003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.654015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.654289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.654302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.654557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.654567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.654778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.654789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.655089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.655100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.655399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.655410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 
00:29:11.378 [2024-11-05 19:18:40.655760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.655772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.656071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.656084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.656390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.656401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.656693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.656705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.657005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.657019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.657342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.657353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.657661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.657673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.657892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.657903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.658222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.658234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.658534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.658547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 
00:29:11.378 [2024-11-05 19:18:40.658852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.658863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.659200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.659212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.659534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.659545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.659862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.659873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.660224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.660235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.660622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.660634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.660945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.378 [2024-11-05 19:18:40.660957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.378 qpair failed and we were unable to recover it. 00:29:11.378 [2024-11-05 19:18:40.661134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.661145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.661462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.661474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.661760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.661772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 
00:29:11.379 [2024-11-05 19:18:40.662065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.662076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.662362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.662374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.662725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.662737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.663033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.663045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.663371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.663383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.663565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.663577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.663834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.663846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.664154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.664165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.664473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.664485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.664814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.664826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 
00:29:11.379 [2024-11-05 19:18:40.665188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.665199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.665495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.665506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.665823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.665834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.379 [2024-11-05 19:18:40.666111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.379 [2024-11-05 19:18:40.666122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.379 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.666309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.666322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.667083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.667106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.667412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.667425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.667754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.667767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.668094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.668107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.668408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.668419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 
00:29:11.655 [2024-11-05 19:18:40.668744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.668759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.669043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.669054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.655 [2024-11-05 19:18:40.669360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.655 [2024-11-05 19:18:40.669373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.655 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.669636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.669648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.669964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.669977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.670184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.670196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.670473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.670485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.670808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.670819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.671118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.671129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.671458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.671469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 
00:29:11.656 [2024-11-05 19:18:40.671768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.671779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.672092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.672103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.672932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.672954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.673275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.673288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.673588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.673600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.673944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.673958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.674257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.674268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.674594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.674605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.674914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.674926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.675304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.675315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 
00:29:11.656 [2024-11-05 19:18:40.675615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.675628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.675960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.675971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.676154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.676165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.676489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.676500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.676766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.676778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.677082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.677093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.677396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.677407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.677722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.677733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.678023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.678034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.678341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.678352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 
00:29:11.656 [2024-11-05 19:18:40.678658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.678670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.678997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.679008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.679312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.679328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.679546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.679558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.679861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.679873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.680154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.680165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.680335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.680347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.680703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.680714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.681021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.681033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.656 [2024-11-05 19:18:40.681358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.681369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 
00:29:11.656 [2024-11-05 19:18:40.681670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.656 [2024-11-05 19:18:40.681690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.656 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.682006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.682017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.682279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.682290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.682599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.682611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.682936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.682949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.683154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.683165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.683473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.683484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.683786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.683798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.684109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.684120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.684444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.684455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 
00:29:11.657 [2024-11-05 19:18:40.684778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.684791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.685086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.685097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.685397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.685409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.685753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.685764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.686058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.686071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.686379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.686389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.686676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.686697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.687006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.687018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.687323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.687336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.687634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.687649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 
00:29:11.657 [2024-11-05 19:18:40.687981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.687992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.688275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.688286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.688615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.688627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.688911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.688923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.689233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.689245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.689459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.689471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.689632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.689644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.689958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.689970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.690303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.690315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.690599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.690611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 
00:29:11.657 [2024-11-05 19:18:40.690910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.690923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.691265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.691277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.691490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.691500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.691798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.691810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.692096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.692107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.692326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.692337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.657 [2024-11-05 19:18:40.692651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.657 [2024-11-05 19:18:40.692662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.657 qpair failed and we were unable to recover it. 00:29:11.658 [2024-11-05 19:18:40.692948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.658 [2024-11-05 19:18:40.692960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.658 qpair failed and we were unable to recover it. 00:29:11.658 [2024-11-05 19:18:40.693233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.658 [2024-11-05 19:18:40.693243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.658 qpair failed and we were unable to recover it. 00:29:11.658 [2024-11-05 19:18:40.693542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.658 [2024-11-05 19:18:40.693553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.658 qpair failed and we were unable to recover it. 
00:29:11.658 [2024-11-05 19:18:40.693883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.693895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.694232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.694244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.694554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.694566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.694896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.694908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.695178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.695189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.695498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.695509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.695816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.695830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.696080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.696092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.696373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.696383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.696594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.696605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.696903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.696914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.697217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.697228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.697534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.697546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.697844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.697855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.697939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.698199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.698210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.698525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.698536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.698823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.698834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.699159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.699171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.699486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.699498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.699816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.699828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.700131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.700144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.700426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.700437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.700763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.658 [2024-11-05 19:18:40.700775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.658 qpair failed and we were unable to recover it.
00:29:11.658 [2024-11-05 19:18:40.700984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.700995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.701316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.701328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.701618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.701629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.701920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.701931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.702248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.702259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.702530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.702540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.702919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.702930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.703210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.703220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.703585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.703596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.703896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.703907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.704116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.704127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.704505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.704516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.704823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.704834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.705133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.705143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.705429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.705441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.705720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.705731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.706038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.706049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.706259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.706270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.706607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.706618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.706920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.706932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.707244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.707255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.707552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.707563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.707870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.707882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.708193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.708206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.708520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.708532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.708846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.708858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.709147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.709158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.709438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.659 [2024-11-05 19:18:40.709449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.659 qpair failed and we were unable to recover it.
00:29:11.659 [2024-11-05 19:18:40.709762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.709774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.710072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.710083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.710398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.710409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.710775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.710787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.711096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.711107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.711398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.711410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.711717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.711729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.712024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.712036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.712353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.712365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.712567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.712578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.712793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.712804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.713016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.713028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.713240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.713252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.713619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.713634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.713936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.713947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.714156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.714166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.714494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.714505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.714861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.714873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.715093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.715104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.715400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.715411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.715621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.715631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.715829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.715841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.716057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.716069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.716398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.716409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.716708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.716719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.717009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.717021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.717325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.717336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.660 [2024-11-05 19:18:40.717647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.660 [2024-11-05 19:18:40.717658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.660 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.717941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.717952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.718281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.718292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.718596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.718608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.718930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.718942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.719235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.719247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.719534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.719545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.719838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.719850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.720029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.720041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.720386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.720398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.720739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.720754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.721048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.721060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.721335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.721346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.721711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.721925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.721936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.722232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.722242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.722598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.722609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.722925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.722937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.723134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.723145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.723446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.723457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.723759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.723770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.724053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.724064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.724373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.661 [2024-11-05 19:18:40.724387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.661 qpair failed and we were unable to recover it.
00:29:11.661 [2024-11-05 19:18:40.724756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.724768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.724964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.724974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.725336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.725347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.725654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.725667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.725842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.725853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.726200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.726212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.726553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.726565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.726771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.726782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.727095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.727106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.727412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.727424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.727830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.727842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.728200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.728211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.728511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.728522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.728841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.728852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.729156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.729168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.729534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.729545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.729742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.729756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.729982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.729994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.730379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.730391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.730711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.730723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.730942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.730954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.731359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.731372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.731675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.731687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.731981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.731994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.732282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.732294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.732611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.732623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.732916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.732928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.662 qpair failed and we were unable to recover it.
00:29:11.662 [2024-11-05 19:18:40.733128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.662 [2024-11-05 19:18:40.733141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.733446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.733459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.733775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.733787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.734000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.734011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.734312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.734324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.734514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.734525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.734723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.734734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.734935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.734946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.735239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.735249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.735531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.735542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.735736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.735751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.735953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.735964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.736284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.736296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.736619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.736631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.736943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.736955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.737282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.737293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.737509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.737520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.737848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.737860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.738195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.738206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.738643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.738654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.739003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.739015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.739299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.739310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.739618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.739631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.740004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.740015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.740228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.740239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.740453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.740465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.740765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.663 [2024-11-05 19:18:40.741069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.663 [2024-11-05 19:18:40.741081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.663 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.741234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.741246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.741566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.741578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.741979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.741991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.742272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.742283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.742491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.742502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.742696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.742706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.743077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.743088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.743403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.743415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.743729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.743740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.744056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.744068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.744366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.744378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.744682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.744694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.745142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.745156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.745429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.745440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.745643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.745653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.745966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.745978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.746359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.746370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.746675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.746687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.746907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.746918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.747297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.747308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.747527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.747538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.747853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.747864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.748199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.748210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.748381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.748394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.748668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.748679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.749006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.749018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.749351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.664 [2024-11-05 19:18:40.749363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.664 qpair failed and we were unable to recover it.
00:29:11.664 [2024-11-05 19:18:40.749549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.749562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.749883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.749895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.750313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.750324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.750626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.750637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.750939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.750950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.751255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.751267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.751558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.751569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.751724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.751735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.752040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.752051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.752365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.752376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.752680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.752691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.753052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.753063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.753383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.753398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.753703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.753714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.754029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.754042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.754357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.754368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.754578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.754589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.754932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.754945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.755254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.755266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.755473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.755484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.755787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.755799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.755927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.755938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.756237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.756248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.756553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.756565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.756862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.756874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.757205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.665 [2024-11-05 19:18:40.757216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.665 qpair failed and we were unable to recover it.
00:29:11.665 [2024-11-05 19:18:40.757501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.665 [2024-11-05 19:18:40.757512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.665 qpair failed and we were unable to recover it. 00:29:11.665 [2024-11-05 19:18:40.757810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.665 [2024-11-05 19:18:40.757822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.758131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.758142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.758296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.758307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.758609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.758621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.758950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.758962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.759178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.759189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.759494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.759505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.759777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.759789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.760089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.760100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 
00:29:11.666 [2024-11-05 19:18:40.760402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.760413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.760607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.760618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.760948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.760960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.761135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.761150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.761469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.761480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.761568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.761579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.761909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.761922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.762183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.762194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.762507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.762519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.762699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.762712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 
00:29:11.666 [2024-11-05 19:18:40.763008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.763020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.763226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.763237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.763413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.763425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.763626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.763638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.763976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.763988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.764170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.764182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.764407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.764774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.764787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.765125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.765136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.765443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.765455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 
00:29:11.666 [2024-11-05 19:18:40.765766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.666 [2024-11-05 19:18:40.765778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.666 qpair failed and we were unable to recover it. 00:29:11.666 [2024-11-05 19:18:40.766087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.766099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.766395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.766406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.766614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.766625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.766952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.766963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.767292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.767303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.767502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.767674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.767685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.767891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.767902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.768134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.768144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 
00:29:11.667 [2024-11-05 19:18:40.768453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.768464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.768788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.768799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.769027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.769038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.769347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.769358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.769655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.769665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.769978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.769990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.770277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.770296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.770507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.770518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.770821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.770834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.771192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.771203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 
00:29:11.667 [2024-11-05 19:18:40.771516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.771528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.771868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.771880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.772187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.772198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.772514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.667 [2024-11-05 19:18:40.772525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.667 qpair failed and we were unable to recover it. 00:29:11.667 [2024-11-05 19:18:40.772833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.772848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.773168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.773179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.773464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.773475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.773799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.773810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.774032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.774043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.774238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.774250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 
00:29:11.668 [2024-11-05 19:18:40.774554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.774565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.774729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.774741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.774941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.774952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.775252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.775264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.775544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.775554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.775786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.775797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.776112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.776124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.776326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.776337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.776540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.776550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.776874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.776886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 
00:29:11.668 [2024-11-05 19:18:40.777202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.777213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.777553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.777564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.777930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.777942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.778242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.778253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.778564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.778576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.778817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.778829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.779184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.779196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.779502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.779513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.779757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.779768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.780079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.780090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 
00:29:11.668 [2024-11-05 19:18:40.780273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.780284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.780485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.780499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.780730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.668 [2024-11-05 19:18:40.780741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.668 qpair failed and we were unable to recover it. 00:29:11.668 [2024-11-05 19:18:40.780982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.780993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.781298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.781310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.781625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.781636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.781817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.781828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.782116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.782127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.782431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.782441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.782766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.782777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 
00:29:11.669 [2024-11-05 19:18:40.783153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.783163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.783433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.783444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.783643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.783655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.784065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.784077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.784377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.784389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.784732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.784743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.784970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.784982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.785310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.785321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.785530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.785540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.785841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.785852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 
00:29:11.669 [2024-11-05 19:18:40.786242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.786254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.786568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.786580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.786914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.786926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.787268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.787279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.787561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.787572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.787856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.787868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.788042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.788053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.788339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.788350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.788650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.788664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.788775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.788786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 
00:29:11.669 [2024-11-05 19:18:40.789107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.789118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.789423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.789435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.789735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.789751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.669 qpair failed and we were unable to recover it. 00:29:11.669 [2024-11-05 19:18:40.790051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.669 [2024-11-05 19:18:40.790063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.790375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.790386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.790753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.790765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.791069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.791080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.791383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.791394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.791702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.791713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.791934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.791946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 
00:29:11.670 [2024-11-05 19:18:40.792153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.792164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.792491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.792503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.792834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.792845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.793168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.793180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.793507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.793518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.793742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.793756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.794137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.794148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.794454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.794699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.794711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 00:29:11.670 [2024-11-05 19:18:40.795103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.795114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it. 
00:29:11.670 [2024-11-05 19:18:40.795425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.670 [2024-11-05 19:18:40.795436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.670 qpair failed and we were unable to recover it.
00:29:11.670 [... the same three-message pattern repeats for every reconnect attempt from 19:18:40.795 through 19:18:40.855: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420, and the qpair fails without recovery ...]
00:29:11.676 [2024-11-05 19:18:40.855168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.855179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it.
00:29:11.676 [2024-11-05 19:18:40.855426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.855437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.855726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.855738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.856104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.856116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.856436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.856447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.856637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.856649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.856885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.856897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.857088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.857279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.857428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.857607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 
00:29:11.676 [2024-11-05 19:18:40.857875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.857976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.857985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.858061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.858071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.858371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.858383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.858738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.858752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.858984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.858994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.859182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.859195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.859528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.859539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.859674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.859684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.859772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.859782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 
00:29:11.676 [2024-11-05 19:18:40.860015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.860026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.860335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.860346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.860650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.860661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.860862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.860874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.861041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.861051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.861376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.861386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.861706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.861717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.862174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.862185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.862483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.862494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.862684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.862695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 
00:29:11.676 [2024-11-05 19:18:40.863034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.863046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.863326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.863336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.863678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.863689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.864023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.864034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.676 qpair failed and we were unable to recover it. 00:29:11.676 [2024-11-05 19:18:40.864392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.676 [2024-11-05 19:18:40.864403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.864696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.864707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.864884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.864897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.865239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.865251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.865457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.865468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.865775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.865787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 
00:29:11.677 [2024-11-05 19:18:40.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.865999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.866195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.866206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.866378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.866388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.866667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.866678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.866995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.867008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.867296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.867307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.867693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.867704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.867915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.867926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.868121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.868140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.868465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.868476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 
00:29:11.677 [2024-11-05 19:18:40.868794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.868806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.869135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.869146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.869457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.869469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.869782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.869798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.870173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.870184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.870498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.870509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.870825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.870837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.871152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.871164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.871477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.871488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.871814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.871826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 
00:29:11.677 [2024-11-05 19:18:40.872188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.872199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.872516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.872527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.872859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.872870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.873295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.873306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.873672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.873683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.874010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.874029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.874228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.874239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.874526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.874538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.677 [2024-11-05 19:18:40.874721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.677 [2024-11-05 19:18:40.874732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.677 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.875057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.875068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 
00:29:11.678 [2024-11-05 19:18:40.875272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.875283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.875582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.875593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.875894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.875906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.876350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.876362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.876592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.876605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.876848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.876859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.877170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.877181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.877435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.877446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.877778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.877790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.877974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.877986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 
00:29:11.678 [2024-11-05 19:18:40.878295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.878308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.878620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.878632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.879042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.879054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.879255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.879266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.879527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.879538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.879724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.879735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.880049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.880061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.880371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.880383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.880699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.880712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.881017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.881029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 
00:29:11.678 [2024-11-05 19:18:40.881360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.881371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.881696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.881707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.881999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.882010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.882309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.882320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.882651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.882663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.882820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.882832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.883139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.883151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.883450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.883461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.883754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.883765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.884077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.884089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 
00:29:11.678 [2024-11-05 19:18:40.884399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.884409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.884581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.884592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.884889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.884902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.885249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.885259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.885427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.885439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.885764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.885777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.886109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.886120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.678 qpair failed and we were unable to recover it. 00:29:11.678 [2024-11-05 19:18:40.886447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.678 [2024-11-05 19:18:40.886459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.886679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.886691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.887005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.887016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 
00:29:11.679 [2024-11-05 19:18:40.887294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.887305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.887615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.887627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.887805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.887816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.888101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.888112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.888393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.888405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.888738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.888754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.889086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.889099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.889439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.889449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.889727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.889738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.889963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.889974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 
00:29:11.679 [2024-11-05 19:18:40.890279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.890291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.890628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.890640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.890981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.890993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.891666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.891690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.892008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.892021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.892350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.892362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.892691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.892703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.892910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.892922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.893249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.893261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 00:29:11.679 [2024-11-05 19:18:40.893570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.679 [2024-11-05 19:18:40.893582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.679 qpair failed and we were unable to recover it. 
00:29:11.679 [2024-11-05 19:18:40.893903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.679 [2024-11-05 19:18:40.893914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.679 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats ~200 more times between 19:18:40.894241 and 19:18:40.959599 (elapsed 00:29:11.679 through 00:29:11.685); duplicate entries collapsed ...]
00:29:11.685 [2024-11-05 19:18:40.959908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.959920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.960230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.960241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.960552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.960563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.960870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.960882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.961260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.961271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.961464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.961475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.961691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.961703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.962011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.962023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.962365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.962377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.962675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.962686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 
00:29:11.685 [2024-11-05 19:18:40.962957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.962968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.963250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.963261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.963564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.963575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.963876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.963888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.964187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.964198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.964529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.964540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.964840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.964852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.965159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.965170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.685 [2024-11-05 19:18:40.965348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.685 [2024-11-05 19:18:40.965359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.685 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.965671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.965683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 
00:29:11.965 [2024-11-05 19:18:40.965986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.965998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.966356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.966368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.967344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.967369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.967711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.968503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.968524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.968771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.968788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.969101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.969112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.969404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.969416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.969722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.969733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.970020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.970031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 
00:29:11.965 [2024-11-05 19:18:40.970351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.970363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.970661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.970673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.970982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.970993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.971288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.971299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.971619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.971631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.971963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.965 [2024-11-05 19:18:40.971976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.965 qpair failed and we were unable to recover it. 00:29:11.965 [2024-11-05 19:18:40.972295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.972307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.972632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.972644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.972920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.972932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.973135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.973147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 
00:29:11.966 [2024-11-05 19:18:40.973477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.973489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.973817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.973829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.974153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.974164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.974492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.974503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.974772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.974784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.975087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.975100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.975410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.975421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.975714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.975725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.976055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.976067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.976394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.976405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 
00:29:11.966 [2024-11-05 19:18:40.976712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.976723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.976950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.976961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.977272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.977286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.977609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.977622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.977852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.978203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.978214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.978553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.978566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.978766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.978778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.979086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.979097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.979412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.979423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 
00:29:11.966 [2024-11-05 19:18:40.979695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.979707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.979919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.979930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.980212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.980223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.980514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.980525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.980859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.980871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.981187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.981200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.981516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.981527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.981848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.981859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.982146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.982157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.982452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.982463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 
00:29:11.966 [2024-11-05 19:18:40.982787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.982798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.983125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.983136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.983438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.983449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.983836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.983848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.966 [2024-11-05 19:18:40.984261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.966 [2024-11-05 19:18:40.984272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.966 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.984601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.984613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.984830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.984843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.985167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.985179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.985380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.985391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.985743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.985760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 
00:29:11.967 [2024-11-05 19:18:40.986054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.986064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.986373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.986384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.986696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.986708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.987072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.987083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.987397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.987409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.987631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.987642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.987950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.987961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.988261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.988271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.988475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.988486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.988706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.988718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 
00:29:11.967 [2024-11-05 19:18:40.989018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.989029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.989398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.989409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.989680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.989692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.989908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.989919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.990240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.990251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.990445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.990456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.990793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.990804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.991122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.991132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.991317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.991330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.991604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.991615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 
00:29:11.967 [2024-11-05 19:18:40.991897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.991908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.992222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.992233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.992391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.992401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.992617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.992627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.992966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.992978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.993291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.993302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.993618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.993629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.993972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.993984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.994294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.994305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.994616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.994628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 
00:29:11.967 [2024-11-05 19:18:40.994899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.994910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.995218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.995229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.995616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.995627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.967 qpair failed and we were unable to recover it. 00:29:11.967 [2024-11-05 19:18:40.995848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.967 [2024-11-05 19:18:40.995859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.996184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.996196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.996498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.996509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.996893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.996904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.997214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.997226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.997543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.997555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.997875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.997888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 
00:29:11.968 [2024-11-05 19:18:40.998211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.998222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.998524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.998536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.998775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.998786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.998902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.998912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.999124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.999135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.999438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.999450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.999556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.999567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:40.999878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:40.999890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:41.000344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:41.000359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 00:29:11.968 [2024-11-05 19:18:41.000672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.968 [2024-11-05 19:18:41.000685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.968 qpair failed and we were unable to recover it. 
00:29:11.968 [2024-11-05 19:18:41.000900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.968 [2024-11-05 19:18:41.000912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.968 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every reconnect attempt, typically a few hundred microseconds apart, from 19:18:41.001231 through 19:18:41.064594: connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x150a0c0, and the qpair cannot be recovered ...]
00:29:11.974 [2024-11-05 19:18:41.064900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.974 [2024-11-05 19:18:41.064911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.974 qpair failed and we were unable to recover it.
00:29:11.974 [2024-11-05 19:18:41.065110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.065121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.065431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.065442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.065765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.065777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.066184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.066196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.066413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.066424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.066629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.066640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.066960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.066972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.067280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.067292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.067612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.067624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.067922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.067935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 
00:29:11.974 [2024-11-05 19:18:41.068107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.068119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.068486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.068498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.068777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.068789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.069085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.069097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.069433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.069444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.069754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.069766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.069953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.069964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.070290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.070301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.070494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.070505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.070831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.070842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 
00:29:11.974 [2024-11-05 19:18:41.071171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.071182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.071507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.071519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.071823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.071836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.072214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.072227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.072531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.072544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.072872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.072884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.073205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.073216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.073520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.073531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.073840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.073852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.074019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.074030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 
00:29:11.974 [2024-11-05 19:18:41.074329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.074340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.074692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.974 [2024-11-05 19:18:41.074704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.974 qpair failed and we were unable to recover it. 00:29:11.974 [2024-11-05 19:18:41.074985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.074999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.075285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.075296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.075597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.075610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.075913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.075926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.076214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.076226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.076536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.076548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.076822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.076833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.077151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.077163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 
00:29:11.975 [2024-11-05 19:18:41.077458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.077471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.077848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.077860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.078184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.078195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.078376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.078388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.078705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.078716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.079042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.079055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.079424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.079436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.079778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.079790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.080095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.080107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.080441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.080453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 
00:29:11.975 [2024-11-05 19:18:41.080744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.080761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.081622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.081646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.081922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.081934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.082251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.082263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.082544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.082555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.082853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.082865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.083175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.083187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.083493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.083504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.083691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.083702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.084012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.084024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 
00:29:11.975 [2024-11-05 19:18:41.084335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.084347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.084653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.084665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.084976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.084988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.085169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.085180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.085528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.085540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.085847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.085858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.086198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.086211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.086493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.086505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.086815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.086827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.975 [2024-11-05 19:18:41.087147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.087158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 
00:29:11.975 [2024-11-05 19:18:41.087453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.975 [2024-11-05 19:18:41.087464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.975 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.087787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.087799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.088095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.088107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.088288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.088299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.088573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.088585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.088898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.088910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.089251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.089263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.089584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.089596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.089915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.089926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.090196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.090208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 
00:29:11.976 [2024-11-05 19:18:41.090520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.090531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.090854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.090866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.091156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.091166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.091455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.091466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.091811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.091822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.092142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.092153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.092462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.092475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.092803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.092815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.093129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.093141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.093340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.093351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 
00:29:11.976 [2024-11-05 19:18:41.093666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.093676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.093843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.093856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.094186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.094197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.094501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.094512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.094789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.094801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.095083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.095094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.095427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.095438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.095602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.095615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.095906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.095918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.096228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.096240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 
00:29:11.976 [2024-11-05 19:18:41.096549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.096560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.096865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.096876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.097073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.097085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.097254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.097266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.097414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.097426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.097724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.097736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.098073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.098084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.098400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.098411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.098564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.098574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.976 qpair failed and we were unable to recover it. 00:29:11.976 [2024-11-05 19:18:41.098867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.976 [2024-11-05 19:18:41.098878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 
00:29:11.977 [2024-11-05 19:18:41.099056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.099066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.099339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.099349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.099656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.099666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.100003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.100013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.100308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.100318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.100626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.100636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.100921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.100932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.101236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.101246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.101543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.101552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.101740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.101757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 
00:29:11.977 [2024-11-05 19:18:41.102043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.102054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.102380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.102391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.102694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.102705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.102884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.102895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.103132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.103144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.103440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.103451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.103750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.103762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.104098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.104110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.104422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.104434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 00:29:11.977 [2024-11-05 19:18:41.104711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.977 [2024-11-05 19:18:41.104723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.977 qpair failed and we were unable to recover it. 
00:29:11.977 [2024-11-05 19:18:41.105070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.977 [2024-11-05 19:18:41.105083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.977 qpair failed and we were unable to recover it.
00:29:11.983 [log entries from 19:18:41.105 through 19:18:41.170 elided: the identical three-line sequence — posix_sock_create connect() failure with errno = 111, the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x150a0c0 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats continuously with no other output; duplicates removed]
00:29:11.983 [2024-11-05 19:18:41.171102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.171113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.171414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.171427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.171720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.171732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.172045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.172057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.172390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.172401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.172754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.172766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.173130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.173141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.173355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.173366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.173643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.173655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.173978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.173990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 
00:29:11.983 [2024-11-05 19:18:41.174265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.174276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.174584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.174596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.174894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.174906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.175241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.175253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.175546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.175558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.175775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.175787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.176101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.176113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.176427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.176438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.176716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.176727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.177038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.177051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 
00:29:11.983 [2024-11-05 19:18:41.177373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.177385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.177697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.177709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.178029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.983 [2024-11-05 19:18:41.178041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.983 qpair failed and we were unable to recover it. 00:29:11.983 [2024-11-05 19:18:41.178232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.178245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.178614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.178625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.178941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.178953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.179289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.179300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.179599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.179612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.179833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.179844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.180167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.180187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 
00:29:11.984 [2024-11-05 19:18:41.180470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.180480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.180778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.180790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.181118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.181129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.181426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.181438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.181716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.181727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.182048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.182061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.182239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.182252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.182568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.182580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.182920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.182932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.183284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.183295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 
00:29:11.984 [2024-11-05 19:18:41.183599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.183610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.183858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.183870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.184165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.184177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.184499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.184511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.184844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.184857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.185195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.185206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.185537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.185549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.185768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.185780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.186110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.186123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.984 [2024-11-05 19:18:41.186405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.186417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 
00:29:11.984 [2024-11-05 19:18:41.186712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.984 [2024-11-05 19:18:41.186726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.984 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.187047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.187059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.187357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.187368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.187626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.187637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.187957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.187969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.188297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.188308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.188500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.188511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.188708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.188721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.189035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.189047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.189350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.189362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 
00:29:11.985 [2024-11-05 19:18:41.189660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.189672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.189979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.189990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.190327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.190339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.190509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.190522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.190700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.191111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.191124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.191423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.191435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.191761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.191773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.191881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.191893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.192223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.192234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 
00:29:11.985 [2024-11-05 19:18:41.192523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.192534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.192686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.192697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.193054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.193065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.193277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.193288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.193461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.193472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.193791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.193802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.194152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.194169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.194479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.194490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.194791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.194803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.195132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.195143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 
00:29:11.985 [2024-11-05 19:18:41.195475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.195487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.195787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.195799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.196023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.196353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.196364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.196685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.196697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.196980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.196992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.197196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.197206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.197437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.197448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.197773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.197785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.985 qpair failed and we were unable to recover it. 00:29:11.985 [2024-11-05 19:18:41.197991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.985 [2024-11-05 19:18:41.198001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 
00:29:11.986 [2024-11-05 19:18:41.198271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.198282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.198613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.198625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.198789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.198800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.199109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.199121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.199453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.199466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.199770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.199963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.199975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.200278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.200289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.200575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.200586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.200895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.200906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 
00:29:11.986 [2024-11-05 19:18:41.201310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.201321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.201621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.201632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.201950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.201961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.202284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.202296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.202596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.202607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.202932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.202944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.203246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.203257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.203555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.203567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.203823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.203835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.204179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.204191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 
00:29:11.986 [2024-11-05 19:18:41.204365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.204377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.204592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.204603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.204784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.204796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.205111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.205383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.205394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.205652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.205664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.205973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.205986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.206293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.206305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.206616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.986 [2024-11-05 19:18:41.206630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.986 qpair failed and we were unable to recover it. 00:29:11.986 [2024-11-05 19:18:41.206959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.206971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 
00:29:11.987 [2024-11-05 19:18:41.207179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.207190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.207500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.207512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.207706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.207718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.207887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.207899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.208225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.208236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.208537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.208549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.208868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.208879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.209079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.209090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.209360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.209371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 00:29:11.987 [2024-11-05 19:18:41.209697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.987 [2024-11-05 19:18:41.209708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:11.987 qpair failed and we were unable to recover it. 
00:29:11.987 [2024-11-05 19:18:41.210021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.987 [2024-11-05 19:18:41.210032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:11.987 qpair failed and we were unable to recover it.
00:29:11.987 [... the same three-line error repeats back-to-back for every reconnect attempt from 19:18:41.210 through 19:18:41.276 (console time 00:29:11.987-00:29:12.269); each connect() to 10.0.0.2:4420 fails with errno = 111 and tqpair=0x150a0c0 is never recovered ...]
00:29:12.269 [2024-11-05 19:18:41.276237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.269 [2024-11-05 19:18:41.276249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.269 qpair failed and we were unable to recover it.
00:29:12.269 [2024-11-05 19:18:41.276544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.276556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.276740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.276765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.277051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.277063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.277404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.277416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.277726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.277737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.278067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.278080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.278410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.278422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.278733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.278745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.279079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.279091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.279398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.279410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 
00:29:12.269 [2024-11-05 19:18:41.279767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.279778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.280059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.280070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.280372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.280384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.280682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.280694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.281029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.281042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.281356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.281369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.281694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.281706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.281902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.281915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.282248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.282260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.282565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.282577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 
00:29:12.269 [2024-11-05 19:18:41.282878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.282891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.283182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.283194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.283559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.283571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.283872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.283883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.284180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.284191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.284485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.284496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.284826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.284837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.285236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.285249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.285547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.285558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.285856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.285867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 
00:29:12.269 [2024-11-05 19:18:41.286169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.286180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.286477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.286489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.269 [2024-11-05 19:18:41.286784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.269 [2024-11-05 19:18:41.286796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.269 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.286973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.286984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.287164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.287176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.287500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.287511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.287808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.287819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.288164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.288175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.288394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.288405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.288690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.288701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 
00:29:12.270 [2024-11-05 19:18:41.289027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.289039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.289346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.289357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.289628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.289639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.289964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.289976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.290281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.290292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.290470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.290482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.290766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.290777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.291098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.291111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.291392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.291405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.291704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.291716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 
00:29:12.270 [2024-11-05 19:18:41.292033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.292046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.292368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.292380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.292706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.292718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.293019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.293032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.293362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.293374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.293678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.293690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.294031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.294044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.294390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.294402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.294731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.294743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.295067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.295080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 
00:29:12.270 [2024-11-05 19:18:41.295388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.295400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.295731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.295743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.296075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.296392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.296405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.296705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.296717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.297056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.297069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.297357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.297368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.297666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.297679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.270 qpair failed and we were unable to recover it. 00:29:12.270 [2024-11-05 19:18:41.297895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.270 [2024-11-05 19:18:41.297908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.298251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.298262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 
00:29:12.271 [2024-11-05 19:18:41.298558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.298569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.298893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.298905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.299210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.299221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.299533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.299545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.299822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.299833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.300163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.300174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.300471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.300483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.300829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.300840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.301128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.301139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.301453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.301464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 
00:29:12.271 [2024-11-05 19:18:41.301773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.301786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.302152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.302164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.302453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.302465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.302766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.302778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.303104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.303115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.303413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.303425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.303757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.303770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.304077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.304088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.304394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.304406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.304709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.304720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 
00:29:12.271 [2024-11-05 19:18:41.305021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.305037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.305357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.305368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.305666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.305678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.305995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.306007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.306343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.306355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.307174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.307196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.307517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.307825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.307837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.308148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.308159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.308430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.308442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 
00:29:12.271 [2024-11-05 19:18:41.308752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.308765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.308925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.308938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.309256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.309269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.309579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.271 [2024-11-05 19:18:41.309591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.271 qpair failed and we were unable to recover it. 00:29:12.271 [2024-11-05 19:18:41.309922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.309933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.310256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.310267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.310496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.310508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.310815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.310827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.311134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.311147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.311472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.311484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 
00:29:12.272 [2024-11-05 19:18:41.311814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.311827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.312148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.312160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.312474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.312486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.312860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.312873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.313196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.313207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.313498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.313510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.313789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.313802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.314118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.314133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.314460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.314473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.314784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 
00:29:12.272 [2024-11-05 19:18:41.315121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.315134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.315436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.315449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.315676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.315688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.316022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.316035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.316242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.316254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.316564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.316576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.316906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.316919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.317235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.317248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.317603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.317615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 00:29:12.272 [2024-11-05 19:18:41.317937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.317950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it. 
00:29:12.272 [2024-11-05 19:18:41.318275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.272 [2024-11-05 19:18:41.318287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.272 qpair failed and we were unable to recover it.
[the same three-line error repeats continuously for tqpair=0x150a0c0 (~200 further occurrences between 19:18:41.318 and 19:18:41.382); only the microsecond timestamps differ]
00:29:12.278 [2024-11-05 19:18:41.382597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.382608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it.
00:29:12.278 [2024-11-05 19:18:41.382913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.382924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.383260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.383271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.383604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.383617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.384002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.384014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.384227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.384239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.384457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.384468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.384778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.384790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.385149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.385161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.385460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.385471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.385812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.385825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 
00:29:12.278 [2024-11-05 19:18:41.386153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.278 [2024-11-05 19:18:41.386164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.278 qpair failed and we were unable to recover it. 00:29:12.278 [2024-11-05 19:18:41.386439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.386450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.386763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.386774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.387050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.387061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.387403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.387415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.387724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.387737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.388025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.388037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.388210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.388219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.388516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.388525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.388832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.388843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 
00:29:12.279 [2024-11-05 19:18:41.389161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.389173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.389472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.389484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.389809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.389821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.390033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.390044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.390352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.390364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.390671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.390682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.391007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.391018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.391327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.391338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.391642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.391654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.391972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.391985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 
00:29:12.279 [2024-11-05 19:18:41.392314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.392326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.392635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.392647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.392963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.392976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.393272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.393283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.393562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.393574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.393761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.393774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.394097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.394108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.394415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.394426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.394756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.394767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.395068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.395079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 
00:29:12.279 [2024-11-05 19:18:41.395398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.395408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.395706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.395717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.396021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.396032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.396310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.396321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.396637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.396647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.397004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.397016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.397201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.397214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.397496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.397507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.397865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.397876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 00:29:12.279 [2024-11-05 19:18:41.398177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.279 [2024-11-05 19:18:41.398188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.279 qpair failed and we were unable to recover it. 
00:29:12.279 [2024-11-05 19:18:41.398541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.398552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.398715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.398727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.399048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.399059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.399278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.399289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.399441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.399452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.399742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.399763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.400080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.400091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.400330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.400341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.400670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.400681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.400949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.400961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 
00:29:12.280 [2024-11-05 19:18:41.401166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.401177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.401481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.401493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.401785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.401799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.402097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.402109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.402391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.402402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.402701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.402711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.402986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.402997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.403303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.403314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.403612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.403623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.403922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.403933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 
00:29:12.280 [2024-11-05 19:18:41.404219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.404230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.404422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.404433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.404720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.404731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.405045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.405057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.405218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.405230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.405564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.405575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.405847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.405859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.406190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.406201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.406496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.406506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.406832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.406843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 
00:29:12.280 [2024-11-05 19:18:41.407110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.407120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.407332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.407344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.407660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.407671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.408003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.408015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.408404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.408415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.408715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.408726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.409060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.409072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.409371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.409382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.409686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.280 [2024-11-05 19:18:41.409697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.280 qpair failed and we were unable to recover it. 00:29:12.280 [2024-11-05 19:18:41.410083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.410096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 
00:29:12.281 [2024-11-05 19:18:41.410313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.410324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.410571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.410582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.410907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.410918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.411214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.411225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.411554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.411565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.411891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.411903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.412204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.412214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.412521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.412532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.412816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.412827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.413107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.413119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 
00:29:12.281 [2024-11-05 19:18:41.413365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.413376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.413700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.413711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.413990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.414001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.414319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.414330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.414640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.414651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.414955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.414967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.415295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.415307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.415610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.415621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.415919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.415930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.416279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.416291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 
00:29:12.281 [2024-11-05 19:18:41.416582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.416593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.416914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.416926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.417108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.417119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.417392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.417403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.417577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.417589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.417867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.417879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.418215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.418226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.418525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.418535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.418815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.418825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.419138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.419147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 
00:29:12.281 [2024-11-05 19:18:41.419458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.419468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.419787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.419797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.420116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.420125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.420434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.420443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.420756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.420766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.421080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.421089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.421387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.421398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.421756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.281 [2024-11-05 19:18:41.421767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.281 qpair failed and we were unable to recover it. 00:29:12.281 [2024-11-05 19:18:41.422042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.282 [2024-11-05 19:18:41.422053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.282 qpair failed and we were unable to recover it. 00:29:12.282 [2024-11-05 19:18:41.422369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.282 [2024-11-05 19:18:41.422380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.282 qpair failed and we were unable to recover it. 
00:29:12.282 [2024-11-05 19:18:41.422714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.282 [2024-11-05 19:18:41.422724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.282 qpair failed and we were unable to recover it.
[last 3 log lines repeated 209 more times for tqpair=0x150a0c0, timestamps 19:18:41.423033 through 19:18:41.488207]
00:29:12.287 [2024-11-05 19:18:41.488516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.488528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.488772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.488784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.489092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.489104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.489425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.489436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.489598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.489609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.489906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.489926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.490289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.490300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.490597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.490608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.490878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.490890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.491188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.491199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 
00:29:12.287 [2024-11-05 19:18:41.491510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.491521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.287 qpair failed and we were unable to recover it. 00:29:12.287 [2024-11-05 19:18:41.491687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.287 [2024-11-05 19:18:41.491698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.491802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.491812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.492108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.492120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.492454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.492466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.492776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.492787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.493116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.493127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.493427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.493438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.493737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.493752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.494046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.494058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 
00:29:12.288 [2024-11-05 19:18:41.494395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.494406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.494706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.494716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.495021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.495033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.495334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.495345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.495622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.495634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.495967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.495980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.496285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.496297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.496620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.496632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.496934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.496946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.497249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.497260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 
00:29:12.288 [2024-11-05 19:18:41.497576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.497587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.497885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.497896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.498173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.498185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.498483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.498685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.498696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.498968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.498981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.499281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.499292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.499592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.499604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.499936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.499948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.500259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.500279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 
00:29:12.288 [2024-11-05 19:18:41.500605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.500616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.500917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.500929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.501108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.501119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.501469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.501481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.501773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.501785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.502086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.502097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.502466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.502477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.502652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.502663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.502956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.502967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 00:29:12.288 [2024-11-05 19:18:41.503278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.288 [2024-11-05 19:18:41.503289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.288 qpair failed and we were unable to recover it. 
00:29:12.288 [2024-11-05 19:18:41.503614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.503625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.503919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.503930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.504221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.504232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.504532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.504544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.504817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.504828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.505148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.505159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.505442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.505453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.505758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.505770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.506084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.506095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.506420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.506432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 
00:29:12.289 [2024-11-05 19:18:41.506723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.506734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.507051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.507065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.507363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.507375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.507682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.507693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.508029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.508041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.508387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.508399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.508708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.508720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.509024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.509035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.509319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.509330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.509662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.509675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 
00:29:12.289 [2024-11-05 19:18:41.509983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.509995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.510300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.510312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.510639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.510651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.510922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.510934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.511242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.511254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.511559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.511571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.511850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.511863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.512042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.512055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.512389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.512402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.512702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.512713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 
00:29:12.289 [2024-11-05 19:18:41.513001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.513315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.513326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.513640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.513650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.513921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.513933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.514224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.514235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.514537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.514549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.514863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.514875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.289 [2024-11-05 19:18:41.515218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.289 [2024-11-05 19:18:41.515229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.289 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.515556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.515570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.515896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.515908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 
00:29:12.290 [2024-11-05 19:18:41.516241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.516252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.516558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.516569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.516929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.516941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.517240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.517252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.517554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.517565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.517869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.517880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.518182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.518193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.518491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.518503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.518804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.518816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.519128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.519139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 
00:29:12.290 [2024-11-05 19:18:41.519477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.519489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.519784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.519796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.520104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.520115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.520505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.520516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.520729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.520740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.521057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.521069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.521334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.521345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.521643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.521654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.521960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.521972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.522290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.522301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 
00:29:12.290 [2024-11-05 19:18:41.522604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.522615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.522909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.522920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.523220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.523231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.523531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.523542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.523852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.523864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.524179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.524191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.524491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.524502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.524799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.524811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.525108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.525121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.525418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.525429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 
00:29:12.290 [2024-11-05 19:18:41.525706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.525716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.526076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.526088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.526421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.290 [2024-11-05 19:18:41.526432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.290 qpair failed and we were unable to recover it. 00:29:12.290 [2024-11-05 19:18:41.526631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.526644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.526945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.526956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.527233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.527244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.527548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.527560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.527730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.527742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.528051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.528063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 00:29:12.291 [2024-11-05 19:18:41.528367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.291 [2024-11-05 19:18:41.528379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.291 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-11-05 19:18:41.528685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.291 [2024-11-05 19:18:41.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-05 19:18:41.528998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.291 [2024-11-05 19:18:41.529010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.291 qpair failed and we were unable to recover it.
[The same three-line record repeats for every subsequent reconnect attempt from 19:18:41.529 through 19:18:41.593 (Jenkins timestamps 00:29:12.291 to 00:29:12.563): connect() to addr=10.0.0.2, port=4420 fails with errno = 111 and tqpair=0x150a0c0 cannot be recovered.]
00:29:12.563 [2024-11-05 19:18:41.594206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.594216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.594527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.594537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.594854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.594864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.595182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.595192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.595525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.595535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.595838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.595849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.596173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.596183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.596480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.596490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.596676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.596687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.596901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.596911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 
00:29:12.563 [2024-11-05 19:18:41.597113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.597123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.597444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.597455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.597781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.597791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.598109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.598118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.598423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.598434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.563 [2024-11-05 19:18:41.598624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.563 [2024-11-05 19:18:41.598634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.563 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.598919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.598930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.599233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.599244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.599561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.599571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.599893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.599904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-11-05 19:18:41.600247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.600257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.600566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.600577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.600883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.600894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.601072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.601083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.601415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.601425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.601730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.601740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.602056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.602066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.602377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.602387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.602676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.602686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.603063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.603074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-11-05 19:18:41.603393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.603404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.603584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.603594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.603894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.603905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.604214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.604226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.604554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.604565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.604787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.604798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.605096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.605106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.605412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.605422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.605724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.605733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.606051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.606061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-11-05 19:18:41.606346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.606356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.606727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.606737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.607069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.607079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.607283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.607293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.607503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.607513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.607833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.607843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.608145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.608156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.608328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.608338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.608642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.608651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.608960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.608970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 
00:29:12.564 [2024-11-05 19:18:41.609285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.609295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.609600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.609610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.609938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.609948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.610274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.610285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.564 qpair failed and we were unable to recover it. 00:29:12.564 [2024-11-05 19:18:41.610595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.564 [2024-11-05 19:18:41.610606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.610914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.610926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.611261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.611271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.611609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.611620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.611965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.611976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.612285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.612296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 
00:29:12.565 [2024-11-05 19:18:41.612583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.612595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.612905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.612917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.613204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.613519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.613529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.613857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.613867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.614166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.614176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.614478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.614488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.614795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.614806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.615113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.615123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.615424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.615433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 
00:29:12.565 [2024-11-05 19:18:41.615743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.808397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.808758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.808774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.809183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.809224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.809559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.809574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.809994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.810039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.810368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.810382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.810759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.810773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.811186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.811228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.811549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.811564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.811996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.812039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 
00:29:12.565 [2024-11-05 19:18:41.812370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.812385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.812714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.812726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.812944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.812957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.813339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.813351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.813651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.813663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.813870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.813883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.814203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.814216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.814526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.814546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.814942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.814954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.815256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.815268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 
00:29:12.565 [2024-11-05 19:18:41.815595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.815607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.815792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.815805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.816203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.565 [2024-11-05 19:18:41.816214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.565 qpair failed and we were unable to recover it. 00:29:12.565 [2024-11-05 19:18:41.816429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.816440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.816732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.816743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.817036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.817048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.817331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.817343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.817650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.817662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.817982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.817993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.818303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.818316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 
00:29:12.566 [2024-11-05 19:18:41.818546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.818558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.818781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.818793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.819091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.819102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.819411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.819423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.819756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.819768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.820077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.820090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.820405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.820416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.820717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.820728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.821032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.821045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.821348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.821360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 
00:29:12.566 [2024-11-05 19:18:41.821695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.821706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.821988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.822000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.822306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.822318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.822577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.822588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.822781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.822795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.823115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.823127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.823345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.823357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.823695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.823706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.823921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.823934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.824088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.824101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 
00:29:12.566 [2024-11-05 19:18:41.824400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.824412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.824722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.824734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.824885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.824898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.825095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.825108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.825422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.825433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.825761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.566 [2024-11-05 19:18:41.825773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.566 qpair failed and we were unable to recover it. 00:29:12.566 [2024-11-05 19:18:41.825964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.567 [2024-11-05 19:18:41.825975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.567 qpair failed and we were unable to recover it. 00:29:12.567 [2024-11-05 19:18:41.826141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.567 [2024-11-05 19:18:41.826152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.567 qpair failed and we were unable to recover it. 00:29:12.567 [2024-11-05 19:18:41.826471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.567 [2024-11-05 19:18:41.826482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.567 qpair failed and we were unable to recover it. 00:29:12.567 [2024-11-05 19:18:41.826799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.567 [2024-11-05 19:18:41.826812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.567 qpair failed and we were unable to recover it. 
00:29:12.567 [2024-11-05 19:18:41.827124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.567 [2024-11-05 19:18:41.827135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.567 qpair failed and we were unable to recover it.
00:29:12.567 [the identical posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for every reconnect attempt from 19:18:41.827 through 19:18:41.893: errno 111 (ECONNREFUSED), target 10.0.0.2 port 4420, tqpair 0x150a0c0; each attempt ends with "qpair failed and we were unable to recover it." and none succeeds]
00:29:12.847 [2024-11-05 19:18:41.893399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.847 [2024-11-05 19:18:41.893411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.847 qpair failed and we were unable to recover it.
00:29:12.847 [2024-11-05 19:18:41.893735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.893750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.894050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.894062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.894399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.894410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.894795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.894808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.895113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.895125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.895446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.895458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.895782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.895793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.896104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.896114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.896410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.896421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.896609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.896620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 
00:29:12.847 [2024-11-05 19:18:41.896913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.896924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.897232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.897244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.897521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.897533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.897833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.897845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.898149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.898160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.898465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.898477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.898803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.898815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.899141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.899153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.899448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.899460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.899772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.899785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 
00:29:12.847 [2024-11-05 19:18:41.900102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.900114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.900454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.900466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.900663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.900674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.900937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.900949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.901278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.901290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.901599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.901610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.901901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.901912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.902241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.902252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.902527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.902537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.902845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.902856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 
00:29:12.847 [2024-11-05 19:18:41.903197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.847 [2024-11-05 19:18:41.903208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.847 qpair failed and we were unable to recover it. 00:29:12.847 [2024-11-05 19:18:41.903551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.903562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.903886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.903900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.904225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.904236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.904538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.904550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.904733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.904750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.905051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.905063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.905361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.905372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.905646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.905658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.906012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.906024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 
00:29:12.848 [2024-11-05 19:18:41.906321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.906333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.906631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.906642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.906950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.906962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.907278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.907289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.907616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.907628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.907994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.908006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.908337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.908348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.908548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.908559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.908867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.909206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.909217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 
00:29:12.848 [2024-11-05 19:18:41.909514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.909525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.909823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.909834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.910181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.910192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.910499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.910511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.910836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.910847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.911139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.911151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.911452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.911463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.911803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.911816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.912123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.912134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.912441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.912455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 
00:29:12.848 [2024-11-05 19:18:41.912672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.912683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.913009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.913021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.913323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.913335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.913630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.913642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.913951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.913963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.914258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.914270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.914595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.914606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.914913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.914925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.915285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.915296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 00:29:12.848 [2024-11-05 19:18:41.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.848 [2024-11-05 19:18:41.915611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.848 qpair failed and we were unable to recover it. 
00:29:12.848 [2024-11-05 19:18:41.915903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.915914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.916290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.916302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.916594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.916605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.916914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.916926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.917237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.917249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.917540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.917551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.917833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.917845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.918224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.918236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.918485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.918496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.918815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.918826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 
00:29:12.849 [2024-11-05 19:18:41.919114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.919126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.919442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.919452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.920049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.920071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.920409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.920422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.920795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.920807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.921080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.921090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.921393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.921403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.921749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.921760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.922144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.922154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.922468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.922478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 
00:29:12.849 [2024-11-05 19:18:41.922808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.922820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.923161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.923171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.923494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.923505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.923794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.923805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.924122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.924132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.924418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.924428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.924713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.924722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.925024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.925034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.925318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.925328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.925614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.925625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 
00:29:12.849 [2024-11-05 19:18:41.925827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.925839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.926168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.926178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.926374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.926384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.926692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.926702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.926995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.927006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.927345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.927355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.927666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.928009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.928020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.849 [2024-11-05 19:18:41.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.849 [2024-11-05 19:18:41.928199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.849 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.928479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.928491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 
00:29:12.850 [2024-11-05 19:18:41.928797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.928808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.929137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.929147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.929480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.929490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.929793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.929804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.930116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.930127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.930406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.930425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.930742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.930757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.931051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.931062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.931350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.931360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.931546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.931556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 
00:29:12.850 [2024-11-05 19:18:41.931856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.931867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.932198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.932208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.932499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.932790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.932800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.933091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.933101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.933387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.933397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.933675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.933684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.934056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.934069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.934247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.934257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 00:29:12.850 [2024-11-05 19:18:41.934467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.850 [2024-11-05 19:18:41.934477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.850 qpair failed and we were unable to recover it. 
00:29:12.850 [2024-11-05 19:18:41.934804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.850 [2024-11-05 19:18:41.934815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.850 qpair failed and we were unable to recover it.
[... the identical connect() failed / sock connection error / qpair failed triplet repeats, essentially unchanged, for roughly two hundred further reconnect attempts against tqpair=0x150a0c0 at 10.0.0.2:4420, timestamps 19:18:41.935172 through 19:18:41.998807 ...]
00:29:12.854 [2024-11-05 19:18:41.999118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.854 [2024-11-05 19:18:41.999128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.854 qpair failed and we were unable to recover it.
00:29:12.856 [2024-11-05 19:18:41.999407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:41.999417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:41.999750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:41.999760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.000102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.000112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.000435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.000445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.000829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.000845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.001127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.001137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.001469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.001479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.001688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.001698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.001881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.001892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.002173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.002183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-11-05 19:18:42.002523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.002532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.002816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.002827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.003120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.003130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.003417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.003427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.003767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.003777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.004134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.004144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.004451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.004461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.004776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.004786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.005107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.005117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.005461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.005470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-11-05 19:18:42.005778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.005789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.006131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.006141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.006438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.006448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.006780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.006791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.007091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.007101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.007412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.007422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.007734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.007744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.856 [2024-11-05 19:18:42.008091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.856 [2024-11-05 19:18:42.008102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.856 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.008452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.008462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.008635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.008646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-11-05 19:18:42.008918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.008929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.009129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.009143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.009497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.009508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.009682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.009692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.010015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.010026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.010307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.010669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.010679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.011016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.011027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.011311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.011321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.011592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.011602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-11-05 19:18:42.011885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.011895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.012193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.012203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.012463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.012473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.012686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.012696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.013001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.013011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.013311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.013322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.013514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.013524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.013844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.013855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.014178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.014188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.014518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.014528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-11-05 19:18:42.014851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.014861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.015186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.015196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.015387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.015396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.015687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.015697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.015987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.015998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.016317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.016327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.016661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.016671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.017001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.017011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.017333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.017345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.017537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.017548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-11-05 19:18:42.017891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.017902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.018065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.018076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.018405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.018415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.018705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.018715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.019022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.019032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.019328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.019338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.857 [2024-11-05 19:18:42.019687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.857 [2024-11-05 19:18:42.019697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.857 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.019999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.020009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.020294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.020303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.020588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.020598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 
00:29:12.858 [2024-11-05 19:18:42.020884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.020894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.021226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.021237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.021529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.021540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.021924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.021935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.022133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.022142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.022408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.022418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.022601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.022610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.022987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.022998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.023319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.023329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.023649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.023658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 
00:29:12.858 [2024-11-05 19:18:42.023965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.023975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.024261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.024271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.024583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.024593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.024907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.024917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.025213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.025223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.025406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.025417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.025760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.025771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.026123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.026133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.026321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.026332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.026663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.026673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 
00:29:12.858 [2024-11-05 19:18:42.027003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.027014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.027314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.027323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.027662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.027672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.028030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.028040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.028330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.028340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.028653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.028663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.028982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.028993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.029180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.029190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.029393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.029404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.029578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.029588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 
00:29:12.858 [2024-11-05 19:18:42.029918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.029929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.030239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.030249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.030540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.030550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.030877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.030888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.031171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.031181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.858 qpair failed and we were unable to recover it. 00:29:12.858 [2024-11-05 19:18:42.031465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.858 [2024-11-05 19:18:42.031475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.031787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.031798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.032102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.032111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.032467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.032477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.032754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.032764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 
00:29:12.859 [2024-11-05 19:18:42.033059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.033069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.033426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.033436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.033737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.033758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.034110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.034120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.034455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.034465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.034759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.034770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.035111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.035121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.035427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.035436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.035727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.035736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.036082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.036092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 
00:29:12.859 [2024-11-05 19:18:42.036377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.036387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.036671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.036681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.037014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.037024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.037305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.037315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.037642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.037652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.037996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.038007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.038309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.038321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.038608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.038618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.038908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.038919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 00:29:12.859 [2024-11-05 19:18:42.039262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.859 [2024-11-05 19:18:42.039271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.859 qpair failed and we were unable to recover it. 
00:29:12.859 [2024-11-05 19:18:42.039558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.859 [2024-11-05 19:18:42.039567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.859 qpair failed and we were unable to recover it.
00:29:12.859 [... the three messages above repeat back-to-back for every reconnect attempt from 19:18:42.039 through 19:18:42.104, roughly 200 attempts in total; every attempt fails identically with errno = 111 against tqpair=0x150a0c0, addr=10.0.0.2, port=4420 ...]
00:29:12.865 [2024-11-05 19:18:42.104712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.865 [2024-11-05 19:18:42.104722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:12.865 qpair failed and we were unable to recover it.
00:29:12.865 [2024-11-05 19:18:42.104876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.104888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.105064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.105075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.105369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.105380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.105694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.105708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.106034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.106045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.106385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.106395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.106562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.106574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.106849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.106859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.107048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.107060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.107393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.107404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 
00:29:12.865 [2024-11-05 19:18:42.107759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.107770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.107970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.107980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.108345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.108355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.108644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.108654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.108962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.108973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.109357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.109561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.109571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.109809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.109820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.110036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.110047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.110339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.110349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 
00:29:12.865 [2024-11-05 19:18:42.110532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.110542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.110825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.110836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.865 [2024-11-05 19:18:42.111011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.865 [2024-11-05 19:18:42.111022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.865 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.111299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.111309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.111601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.111611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.111833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.111844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.112176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.112186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.112480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.112794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.112804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.113108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.113117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 
00:29:12.866 [2024-11-05 19:18:42.113283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.113297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.113583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.113593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.113885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.113895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.114203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.114213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.114510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.114520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.114807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.114817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.115136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.115146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.115328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.115338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.115554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.115564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.115788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.115799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 
00:29:12.866 [2024-11-05 19:18:42.116086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.116097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.116280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.116291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.116631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.116642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.116958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.116969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.117292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.117302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.117577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.117587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.117864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.117875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.118178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.118188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.118482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.118492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.118826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.118837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 
00:29:12.866 [2024-11-05 19:18:42.119130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.119141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.119456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.119466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.119779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.119790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.120103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.120113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.120300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.120310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.120481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.120491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.120768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.120779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.121107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.121117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.121401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.121411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.121696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.121706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 
00:29:12.866 [2024-11-05 19:18:42.122013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.122024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.866 qpair failed and we were unable to recover it. 00:29:12.866 [2024-11-05 19:18:42.122306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.866 [2024-11-05 19:18:42.122316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.122638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.122648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.122924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.122934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.123111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.123121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.123389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.123399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.123750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.123760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.124062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.124073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.124264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.124276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.124479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.124489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 
00:29:12.867 [2024-11-05 19:18:42.124846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.124856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.125235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.125249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.125590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.125600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.125894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.125905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.126271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.126281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.126627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.126637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.126945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.126955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.127263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.127273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.127612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.127622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.127956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.127966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 
00:29:12.867 [2024-11-05 19:18:42.128274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.128284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.128447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.128457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.128737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.128751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.129058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.129068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.129491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.129501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.129710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.129720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.130038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.130049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.130254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.130264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.130579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.130590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.130909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.130920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 
00:29:12.867 [2024-11-05 19:18:42.131125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.131135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.131410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.131420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.131784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.131794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.132145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.132155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.132441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.132450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.132837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.132847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.133167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.133177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.133486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.133496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.133811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.133826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 00:29:12.867 [2024-11-05 19:18:42.134156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.867 [2024-11-05 19:18:42.134166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.867 qpair failed and we were unable to recover it. 
00:29:12.867 [2024-11-05 19:18:42.134495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.134505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.134833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.134844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.135134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.135143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.135476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.135486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.135781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.135792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.136181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.136191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.136508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.136518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.136819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.136830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.137200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.137210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.137497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.137508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 
00:29:12.868 [2024-11-05 19:18:42.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.137729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.138059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.138070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.138368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.138378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.138763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.138774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.139080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.139091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.139417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.139427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.139755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.139766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.140059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.140070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.140323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.140333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.140685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.140695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 
00:29:12.868 [2024-11-05 19:18:42.140910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.140920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.141252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.141262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.141644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.141654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.141943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.141954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.142265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.142275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.142572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.142584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.142875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.142886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.143218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.143228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.143552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.143563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.143874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.143885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 
00:29:12.868 [2024-11-05 19:18:42.144243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.144253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.144573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.144583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.144878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.144888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.145222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.145232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.145526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.145536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.145852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.145863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.146180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.146190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.146521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.868 [2024-11-05 19:18:42.146531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.868 qpair failed and we were unable to recover it. 00:29:12.868 [2024-11-05 19:18:42.146858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.869 [2024-11-05 19:18:42.146868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.869 qpair failed and we were unable to recover it. 00:29:12.869 [2024-11-05 19:18:42.147224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.869 [2024-11-05 19:18:42.147234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:12.869 qpair failed and we were unable to recover it. 
00:29:13.149 [2024-11-05 19:18:42.211185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.149 [2024-11-05 19:18:42.211196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.149 qpair failed and we were unable to recover it. 00:29:13.149 [2024-11-05 19:18:42.211479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.149 [2024-11-05 19:18:42.211489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.149 qpair failed and we were unable to recover it. 00:29:13.149 [2024-11-05 19:18:42.211801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.211813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.212150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.212160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.212452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.212462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.212654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.212664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.212987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.212997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.213319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.213329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.213668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.213678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.213988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.213999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 
00:29:13.150 [2024-11-05 19:18:42.214303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.214313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.214618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.214629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.214927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.214938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.215131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.215142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.215331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.215341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.215620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.215630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.215969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.215980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.216262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.216272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.216601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.216611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.216823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.216833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 
00:29:13.150 [2024-11-05 19:18:42.217155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.217165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.217374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.217383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.217579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.217588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.217822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.217833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.218155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.218165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.218441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.218451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.218755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.218766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.219126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.219135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.219317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.219327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.219658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.219670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 
00:29:13.150 [2024-11-05 19:18:42.220034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.220045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.220385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.220394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.220722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.220732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.220914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.220925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.221205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.221215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.150 qpair failed and we were unable to recover it. 00:29:13.150 [2024-11-05 19:18:42.221532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.150 [2024-11-05 19:18:42.221542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.221749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.221761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.222061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.222071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.222363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.222373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.222679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.222689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 
00:29:13.151 [2024-11-05 19:18:42.223005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.223016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.223365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.223375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.223738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.223756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.224090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.224100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.224417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.224427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.224751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.224762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.225106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.225116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.225401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.225412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.225708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.225719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.226033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.226044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 
00:29:13.151 [2024-11-05 19:18:42.226324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.226334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.226697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.226707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.227054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.227065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.227407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.227419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.227731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.227741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.228112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.228123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.228425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.228435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.228744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.228759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.229045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.229055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.229325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.229335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 
00:29:13.151 [2024-11-05 19:18:42.229665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.229675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.229856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.229868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.230225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.230235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.230528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.230538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.230830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.230840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.231189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.231198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.231492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.231502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.231813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.231824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.232157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.232167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.232475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.232485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 
00:29:13.151 [2024-11-05 19:18:42.232766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.232779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.233095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.233105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.151 [2024-11-05 19:18:42.233433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.151 [2024-11-05 19:18:42.233442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.151 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.233726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.233736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.234043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.234053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.234328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.234338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.234664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.234674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.234997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.235008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.235320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.235331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.235520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.235530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 
00:29:13.152 [2024-11-05 19:18:42.235840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.235851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.236141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.236151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.236432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.236441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.236723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.236734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.237039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.237049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.237347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.237357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.237669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.237679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.238010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.238020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.238314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.238598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.238608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 
00:29:13.152 [2024-11-05 19:18:42.238835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.238845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.239115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.239125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.239449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.239459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.239791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.239801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.240101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.240110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.240391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.240401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.240715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.240725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.240822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.240835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.241112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.241123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.241434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.241444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 
00:29:13.152 [2024-11-05 19:18:42.241735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.241745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.241939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.241949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.242257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.242267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.242558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.242568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.242813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.242824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.243020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.243030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.243412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.152 [2024-11-05 19:18:42.243423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.152 qpair failed and we were unable to recover it. 00:29:13.152 [2024-11-05 19:18:42.243760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.243771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.244106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.244116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.244405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.244415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 
00:29:13.153 [2024-11-05 19:18:42.244728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.244738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.245062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.245072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.245351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.245362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.245717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.246037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.246047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.246238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.246250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.246584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.246595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.246907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.246918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.247199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.247208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.247580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.247590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 
00:29:13.153 [2024-11-05 19:18:42.247928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.247938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.248221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.248231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.248513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.248523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.248804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.248815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.249134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.249147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.249473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.249483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.249766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.249776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.250083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.250092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.250398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.250407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 00:29:13.153 [2024-11-05 19:18:42.250697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.153 [2024-11-05 19:18:42.250707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.153 qpair failed and we were unable to recover it. 
00:29:13.153 [2024-11-05 19:18:42.251068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.153 [2024-11-05 19:18:42.251079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.153 qpair failed and we were unable to recover it.
00:29:13.158 [the same three messages repeat for every reconnect attempt from 19:18:42.251359 through 19:18:42.300388; each connect() to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered]
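For context on the repeated failure above: errno = 111 on Linux is ECONNREFUSED. The host's TCP SYN reaches 10.0.0.2, but nothing is listening on port 4420, so the peer answers with RST and connect() fails immediately; posix_sock_create() then reports the error and the NVMe/TCP qpair cannot be established. The following minimal standalone C sketch (not SPDK code; only the address and port are copied from the log) reproduces the same failure mode:

/* Minimal sketch: connect() to a port with no listener fails with
 * errno = 111 (ECONNREFUSED), the same error posix_sock_create()
 * keeps hitting in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the host keeps dialing in the log. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints
         * "connect() failed: Connection refused (errno = 111)" on Linux. */
        printf("connect() failed: %s (errno = %d)\n", strerror(errno), errno);
    }

    close(fd);
    return 0;
}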
00:29:13.158 [2024-11-05 19:18:42.303408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.303419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 517570 Killed "${NVMF_APP[@]}" "$@" 00:29:13.158 [2024-11-05 19:18:42.303730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.303741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.304087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.304097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.304386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.304397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:13.158 [2024-11-05 19:18:42.304678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.304689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:13.158 [2024-11-05 19:18:42.304989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.305000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:13.158 [2024-11-05 19:18:42.305295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.305306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 
00:29:13.158 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.158 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.158 [2024-11-05 19:18:42.305637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.305648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.305923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.305934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.306230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.306240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.306599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.306609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.306919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.306932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.307274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.307284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.307575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.307585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.307887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.307898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 00:29:13.158 [2024-11-05 19:18:42.308183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.158 [2024-11-05 19:18:42.308193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.158 qpair failed and we were unable to recover it. 
00:29:13.158 [2024-11-05 19:18:42.308507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.308517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.308856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.308867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.309160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.309170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.309464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.309475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.309786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.309797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.310005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.310015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.310362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.310373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.310654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.310665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.310958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.310971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.311276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.311289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.311619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.311850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.311862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.312184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.312196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.312510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.158 [2024-11-05 19:18:42.312522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.158 qpair failed and we were unable to recover it.
00:29:13.158 [2024-11-05 19:18:42.312812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.312823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.313133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.313145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=518532
00:29:13.159 [2024-11-05 19:18:42.313468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.313481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 518532
00:29:13.159 [2024-11-05 19:18:42.313816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.313828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 518532 ']'
00:29:13.159 [2024-11-05 19:18:42.314109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.314122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:13.159 [2024-11-05 19:18:42.314424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:13.159 [2024-11-05 19:18:42.314436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:13.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:13.159 [2024-11-05 19:18:42.314736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.314754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:13.159 [2024-11-05 19:18:42.315026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.315038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 19:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:13.159 [2024-11-05 19:18:42.315347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.315359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.315724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.315736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.316042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.316054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.316256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.316267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.316600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.316612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.316927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.316940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.317098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.317110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.317318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.317329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.317596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.317620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.317920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.317932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.318292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.318303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.318566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.318578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.318879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.318892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.319253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.319265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.319568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.319581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.319880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.319893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.320095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.320107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.320447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.320459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.320557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.320569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.320839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.320852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.321235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.321248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.159 [2024-11-05 19:18:42.321467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.159 [2024-11-05 19:18:42.321478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.159 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.321693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.321705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.322013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.322025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.322335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.322347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.322678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.322690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.322991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.323003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.323347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.323358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.323673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.323685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.324036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.324047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.324353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.324366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.324701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.324712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.325028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.325041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.325355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.325365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.325699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.325710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.325903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.325914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.326238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.326250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.326553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.326566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.326882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.326894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.327241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.327252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.327557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.327568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.327874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.327885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.328185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.328197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.328501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.328512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.328813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.328824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.329142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.329153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.329434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.329446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.329694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.329707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.330010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.330022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.330352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.330363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.330681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.330695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.331005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.331017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.331361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.331372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.331629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.331641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.331943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.331956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.332278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.332289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.332474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.332486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.332841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.332853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.333064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.333373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.333383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.160 qpair failed and we were unable to recover it.
00:29:13.160 [2024-11-05 19:18:42.333689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.160 [2024-11-05 19:18:42.333700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.334011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.334023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.334293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.334304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.334645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.334656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.334952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.334965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.335286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.335297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.335577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.335587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.335785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.335797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.336166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.336177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.336474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.336486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.336824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.336836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.337198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.337209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.337521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.337532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.337798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.337809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.337998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.338008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.338319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.338331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.338662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.338673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.339010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.339023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.339379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.339390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.339719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.339740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.340054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.340064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.340381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.340393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.340697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.340709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.340890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.340903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.341291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.341303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.341611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.341623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.341943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.341956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.342286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.342299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.342614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.342626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.342961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.342972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.343303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.343315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.343635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.343646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.343959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.343972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.344317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.344328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.344675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.344687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.344981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.344992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.345341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.345353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.345529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.345539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.345856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.345867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.161 qpair failed and we were unable to recover it.
00:29:13.161 [2024-11-05 19:18:42.346188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.161 [2024-11-05 19:18:42.346200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.346515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.346526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.346833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.346846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.347143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.347155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.347436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.347447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.347760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.347775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.348019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.348030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.348361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.348371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.348533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.348544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.348880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.348891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.349184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.349195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.349480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.349492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.349799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.349811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.350139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.350151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.350455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.350467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.350760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.350772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.350976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.350987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.351156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.351166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.351227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.351238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.351553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.351565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.351790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.351802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.351998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.352010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.352312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.352324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.352634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.352645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.352819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.352832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.353029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.353041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.353286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.353297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.353553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.353565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.353737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.353761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.354072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.354084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.354417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.354429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.354611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.354623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.354986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.354998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.355326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.355337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.355655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.355666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.355725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.355736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.356061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.356072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.356371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.356383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.356699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.162 [2024-11-05 19:18:42.356710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.162 qpair failed and we were unable to recover it.
00:29:13.162 [2024-11-05 19:18:42.357052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.163 [2024-11-05 19:18:42.357063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.163 qpair failed and we were unable to recover it.
00:29:13.163 [2024-11-05 19:18:42.357373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.163 [2024-11-05 19:18:42.357384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.163 qpair failed and we were unable to recover it.
00:29:13.163 [2024-11-05 19:18:42.357584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.163 [2024-11-05 19:18:42.357594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.163 qpair failed and we were unable to recover it.
00:29:13.163 [2024-11-05 19:18:42.357781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.163 [2024-11-05 19:18:42.357793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.163 qpair failed and we were unable to recover it.
00:29:13.163 [2024-11-05 19:18:42.358136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.358148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.358453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.358465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.358757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.358769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.358961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.358973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.359262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.359274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.359468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.359482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.359653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.359665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.359869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.359882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.360215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.360227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.360605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.360616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 
00:29:13.163 [2024-11-05 19:18:42.360901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.360913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.361232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.361244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.361561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.361573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.361758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.361769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.362086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.362098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.362400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.362411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.362677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.362688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.362922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.362933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.363330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.363341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 00:29:13.163 [2024-11-05 19:18:42.363662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.163 [2024-11-05 19:18:42.363673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.163 qpair failed and we were unable to recover it. 
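On Linux, errno = 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 port 4420 reaches the host but nothing is listening there, so each attempt is actively refused and the qpair connect fails. That is consistent with the target process only just starting up (see the SPDK initialization banner that follows). A minimal standalone C sketch (illustrative only, not SPDK source; address and port are taken from the log) that reproduces the same errno against a reachable host with no listener on that port:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical reproducer, not SPDK code. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP listen port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener present this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}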
00:29:13.163 [2024-11-05 19:18:42.366430] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:29:13.163 [2024-11-05 19:18:42.366474] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
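The EAL parameter vector logged above is assembled internally by SPDK's env layer and handed to DPDK at startup. For reference, a hypothetical standalone C sketch passing a similar argument vector (a subset of the flags logged above) to rte_eal_init(), which returns a negative value on failure:

#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    /* Hypothetical argument vector mirroring part of the logged EAL line. */
    char *eal_argv[] = {
        "nvmf",                          /* process name as logged */
        "-c", "0xF0",                    /* core mask: run on cores 4-7 */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",           /* keeps hugepage files distinct per app */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    /* ... application work would happen here ... */

    rte_eal_cleanup();
    return 0;
}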
00:29:13.168 [2024-11-05 19:18:42.414300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.414311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it.
00:29:13.168 [2024-11-05 19:18:42.414614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.414625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.414911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.414922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.415285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.415296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.415607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.415620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.415938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.415950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.416273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.416286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.416587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.416598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.416901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.416913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.417210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.417221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.417521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.417532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 
00:29:13.168 [2024-11-05 19:18:42.417734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.417751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.418016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.418028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.418324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.418336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.418650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.418661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.418862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.418874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.419167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.419178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.419557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.419568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.419877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.419892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.420211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.420223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.420527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.420540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 
00:29:13.168 [2024-11-05 19:18:42.420874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.420885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.421230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.421241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.421543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.421555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.421737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.421752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.422035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.422047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.422230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.422241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.422550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.422562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.422870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.422882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.423091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.423102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.168 qpair failed and we were unable to recover it. 00:29:13.168 [2024-11-05 19:18:42.423414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.168 [2024-11-05 19:18:42.423425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 
00:29:13.169 [2024-11-05 19:18:42.423763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.423775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.424085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.424096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.424392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.424403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.424621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.424632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.424813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.424827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.425177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.425188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.425519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.425530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.425852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.425864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.426202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.426213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.426387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.426397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 
00:29:13.169 [2024-11-05 19:18:42.426728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.426739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.427043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.427054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.427368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.427379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.427650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.427661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.428010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.428023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.428238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.428249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.428534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.428545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.428831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.428842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.429167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.429178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.429525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.429536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 
00:29:13.169 [2024-11-05 19:18:42.429841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.429852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.430044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.430055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.430328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.430339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.430532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.430544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.430886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.430897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.431213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.431226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.431512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.431523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.431834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.431847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.432158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.432169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.432470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.432482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 
00:29:13.169 [2024-11-05 19:18:42.432773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.432785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.433111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.433122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.433451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.433719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.433730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.434025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.434038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.434215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.434227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.434542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.434553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.434863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.169 [2024-11-05 19:18:42.434874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.169 qpair failed and we were unable to recover it. 00:29:13.169 [2024-11-05 19:18:42.435210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.435222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.435525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.435536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 
00:29:13.170 [2024-11-05 19:18:42.435840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.435852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.436129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.436140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.436444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.436457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.436763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.436775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.437153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.437164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.437466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.437479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.437689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.437701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.437863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.437875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.438161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.438484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.438496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 
00:29:13.170 [2024-11-05 19:18:42.438771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.438783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.439112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.439122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.439446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.439457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.439759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.439771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.439959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.439970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.440287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.440299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.440600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.440611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.440909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.440922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.441096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.441107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.441425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.441436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 
00:29:13.170 [2024-11-05 19:18:42.441743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.441758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.442087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.442098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.442428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.442440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.442739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.442755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.442935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.442946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.443245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.443256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.443592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.443603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.443913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.443924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.444296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.444308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.444639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.444651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 
00:29:13.170 [2024-11-05 19:18:42.444875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.444887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.445111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.445122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.445475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.445804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.445816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.446100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.446112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.446428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.446440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.170 [2024-11-05 19:18:42.446742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.170 [2024-11-05 19:18:42.446759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.170 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.447102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.447113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.447413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.447425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.447728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.447741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 
00:29:13.171 [2024-11-05 19:18:42.447923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.447935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.448316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.448327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.448668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.448685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.448992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.449004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.449307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.449318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.449623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.449635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.449966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.449978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.450249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.450260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.450589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.450600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.450903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.450915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 
00:29:13.171 [2024-11-05 19:18:42.451247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.451259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.451453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.451464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.451740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.451755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.452088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.452099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.452299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.452310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.452603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.452614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.452914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.452926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.453248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.453259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.453480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.453491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 00:29:13.171 [2024-11-05 19:18:42.453686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.171 [2024-11-05 19:18:42.453697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.171 qpair failed and we were unable to recover it. 
00:29:13.171 [2024-11-05 19:18:42.453874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.171 [2024-11-05 19:18:42.453886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.171 qpair failed and we were unable to recover it.
[... the same connect()/qpair retry triplet repeats continuously from 19:18:42.454178 through 19:18:42.465563 ...]
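The errno = 111 in the retry storm above is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 yet, so every connect() from the initiator is rejected and the qpair keeps retrying until the target finishes binding its listener. A minimal sketch of the same condition from a shell, assuming the listener is still down (a hypothetical spot-check, not part of the job script):

    # Zero-I/O probe of the nvmf listen address; this keeps printing
    # "Connection refused" (errno 111) until nvmf_tgt binds 10.0.0.2:4420.
    nc -zv 10.0.0.2 4420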
00:29:13.462 [2024-11-05 19:18:42.465855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.462 [2024-11-05 19:18:42.465866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.462 qpair failed and we were unable to recover it.
[... retry triplet repeats from 19:18:42.466186 through 19:18:42.466767 ...]
00:29:13.462 [2024-11-05 19:18:42.467063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.462 [2024-11-05 19:18:42.467063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:13.462 [2024-11-05 19:18:42.467074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.462 qpair failed and we were unable to recover it.
[... retry triplet repeats from 19:18:42.467402 through 19:18:42.468536 ...]
00:29:13.463 [2024-11-05 19:18:42.468827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.463 [2024-11-05 19:18:42.468839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.463 qpair failed and we were unable to recover it.
[... retry triplet repeats continuously from 19:18:42.469212 through 19:18:42.502087 ...]
00:29:13.466 [2024-11-05 19:18:42.502467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.466 [2024-11-05 19:18:42.502478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.466 qpair failed and we were unable to recover it.
00:29:13.466 [2024-11-05 19:18:42.502575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:13.466 [2024-11-05 19:18:42.502601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:13.466 [2024-11-05 19:18:42.502609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:13.466 [2024-11-05 19:18:42.502615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:13.466 [2024-11-05 19:18:42.502621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... retry triplet repeats from 19:18:42.502789 through 19:18:42.504047 ...]
00:29:13.466 [2024-11-05 19:18:42.504264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:13.466 [2024-11-05 19:18:42.504361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.466 [2024-11-05 19:18:42.504372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.466 qpair failed and we were unable to recover it.
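The app_setup_trace notices above spell out the capture recipe for this run. A minimal session sketch using only what the notices state (assumes a single SPDK app instance, as the notice requires, and that spdk_trace from build/bin is on PATH):

    # Snapshot the running nvmf target's tracepoints at runtime:
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0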
00:29:13.466 [2024-11-05 19:18:42.504349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:13.466 [2024-11-05 19:18:42.504510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:13.466 [2024-11-05 19:18:42.504511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:13.466 [2024-11-05 19:18:42.504547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.466 [2024-11-05 19:18:42.504558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.466 qpair failed and we were unable to recover it.
[... retry triplet repeats from 19:18:42.504866 through 19:18:42.507073 ...]
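With the three notices above plus core 5 earlier, all four reactors are up on cores 4-7, matching the earlier "Total cores available: 4" notice. That layout is consistent with an SPDK CPU mask of 0xF0 (bits 4-7 set); a hypothetical invocation that would produce it, assuming the test scripts pass the mask via -m (the actual command line is set elsewhere in the job):

    # 0xF0 selects cores 4,5,6,7; SPDK pins one reactor thread per core.
    build/bin/nvmf_tgt -m 0xF0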
00:29:13.466 [2024-11-05 19:18:42.507250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.466 [2024-11-05 19:18:42.507261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.466 qpair failed and we were unable to recover it.
[... retry triplet repeats from 19:18:42.507588 through 19:18:42.515782 ...]
00:29:13.467 [2024-11-05 19:18:42.516147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.467 [2024-11-05 19:18:42.516159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.467 qpair failed and we were unable to recover it.
00:29:13.467 [2024-11-05 19:18:42.516468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.516480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.516793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.516804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.517115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.517126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.517428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.517444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.517773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.517785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.518113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.518124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.518460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.518472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.518777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.518788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.518966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.518977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.519252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.519264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 
00:29:13.467 [2024-11-05 19:18:42.519601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.519613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.519912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.519924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.520124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.520134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.520450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.520461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.520742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.520758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.521061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.521074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.467 qpair failed and we were unable to recover it. 00:29:13.467 [2024-11-05 19:18:42.521260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.467 [2024-11-05 19:18:42.521271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.521581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.521594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.521999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.522011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.522311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.522323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-11-05 19:18:42.522508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.522520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.522790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.522801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.523121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.523133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.523398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.523410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.523710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.523722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.524028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.524042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.524373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.524385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.524686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.524700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.525011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.525023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.525313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.525325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-11-05 19:18:42.525607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.525622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.525973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.525986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.526286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.526297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.526610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.526623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.526908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.526920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.527098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.527109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.527444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.527455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.527678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.527688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.527996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.528008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.528352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.528365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-11-05 19:18:42.528670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.528681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.528864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.528877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.529063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.529075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.529397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.529410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.529715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.529727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.530055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.530068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.530398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.530411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.530741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.530759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.531064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.531076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-11-05 19:18:42.531382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.531394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-11-05 19:18:42.531708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.468 [2024-11-05 19:18:42.531719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.531972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.531983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.532152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.532163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.532442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.532454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.532630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.532641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.532837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.532848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.533045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.533056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.533392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.533403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.533752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.533764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.534069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.534081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-11-05 19:18:42.534385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.534397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.534703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.534716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.535032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.535044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.535398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.535409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.535580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.535591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.535913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.535925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.536216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.536229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.536539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.536552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.536852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.536864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.537189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.537202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-11-05 19:18:42.537504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.537516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.537809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.537821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.538113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.538125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.538459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.538471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.538823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.538834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.539153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.539164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.539478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.539490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.539767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.539780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.540126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.540137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.540446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.540458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-11-05 19:18:42.540518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.540529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.540749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.540761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.541095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.541107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.541166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.541177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.541473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.541485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.541834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.541846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.542175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.542188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.542361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.542373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.542694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.469 [2024-11-05 19:18:42.542707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-11-05 19:18:42.543096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.543110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 
00:29:13.470 [2024-11-05 19:18:42.543287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.543299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.543590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.543603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.543910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.543921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.543974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.543985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.544151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.544162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.544326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.544339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.544672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.544684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.544994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.545007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.545318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.545332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.545668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.545679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 
00:29:13.470 [2024-11-05 19:18:42.545847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.545858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.546059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.546070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.546238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.546251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.546417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.546428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.546608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.546619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.546997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.547009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.547321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.547333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.547669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.547681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.547990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.548320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 
00:29:13.470 [2024-11-05 19:18:42.548516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.548577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.548757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.548963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.548975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.549309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.549321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.549688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.549699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.550036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.550048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.550336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.550348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.550653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.550665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.550980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.550991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 
00:29:13.470 [2024-11-05 19:18:42.551302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.551314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.551494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.551506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.551834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.470 [2024-11-05 19:18:42.551846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.470 [2024-11-05 19:18:42.552152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.552163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.552465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.552476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.552667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.552680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.552987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.553000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.553278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.553290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.553598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.553609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 00:29:13.471 [2024-11-05 19:18:42.553988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.553999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it. 
00:29:13.471 [2024-11-05 19:18:42.554310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.471 [2024-11-05 19:18:42.554321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.471 qpair failed and we were unable to recover it.
00:29:13.476 [... the same three-line error repeats with only the timestamp advancing, from 19:18:42.554 through 19:18:42.614 (roughly 200 further occurrences); every instance reports connect() errno = 111 for tqpair=0x150a0c0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:29:13.476 [2024-11-05 19:18:42.614753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.614765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.476 [2024-11-05 19:18:42.615082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.615095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.476 [2024-11-05 19:18:42.615399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.615414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.476 [2024-11-05 19:18:42.615736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.615755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.476 [2024-11-05 19:18:42.616070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.616082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.476 [2024-11-05 19:18:42.616392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.476 [2024-11-05 19:18:42.616404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.476 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.616718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.616732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.617063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.617077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.617267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.617278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.617594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.617605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 
00:29:13.477 [2024-11-05 19:18:42.617942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.617954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.618295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.618306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.618618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.618630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.618915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.618926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.619259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.619271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.619573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.619584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.619755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.619768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.620140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.620151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.620472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.620483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.620817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.620828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 
00:29:13.477 [2024-11-05 19:18:42.621136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.621147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.621481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.621493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.621805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.621816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.622123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.622134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.622468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.622479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.622815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.622830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.623147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.623159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.623495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.623507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.623815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.623827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.624001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.624013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 
00:29:13.477 [2024-11-05 19:18:42.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.624302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.624495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.624506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.624773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.624786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.624980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.624991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.625294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.625306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.625597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.625607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.625893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.625905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.626254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.626265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.626595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.626607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.626947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.626959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 
00:29:13.477 [2024-11-05 19:18:42.627265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.627277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.627580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.627592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.627899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.627911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.628219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.628230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.477 [2024-11-05 19:18:42.628569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.477 [2024-11-05 19:18:42.628581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.477 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.628905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.628917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.629214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.629225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.629532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.629544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.629852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.629864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.630169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.630181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 
00:29:13.478 [2024-11-05 19:18:42.630337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.630350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.630666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.630678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.631016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.631028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.631342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.631355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.631705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.631717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.632041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.632053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.632363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.632375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.632567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.632579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.632879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.632891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.633132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.633143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 
00:29:13.478 [2024-11-05 19:18:42.633468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.633479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.633763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.633774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.634091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.634103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.634445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.634456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.634770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.634783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.635123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.635135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.635298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.635310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.635646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.635656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.635839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.635850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.636028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.636037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 
00:29:13.478 [2024-11-05 19:18:42.636345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.636358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.636539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.636549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.636844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.636856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.637196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.637207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.637396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.637407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.637698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.637709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.637992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.638003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.638236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.638247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.638461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.638472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.638813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.638825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 
00:29:13.478 [2024-11-05 19:18:42.639137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.639148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.639438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.639449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.639759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.639771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.478 qpair failed and we were unable to recover it. 00:29:13.478 [2024-11-05 19:18:42.640072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.478 [2024-11-05 19:18:42.640084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.640297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.640309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.640639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.640651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.640975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.640987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.641287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.641299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.641633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.641644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.641836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.641848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 
00:29:13.479 [2024-11-05 19:18:42.642106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.642117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.642442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.642454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.642826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.642837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.643156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.643167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.643514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.643525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.643833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.643846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.644157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.644168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.644487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.644501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.644815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.644826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.645201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.645212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 
00:29:13.479 [2024-11-05 19:18:42.645544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.645555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.645782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.645793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.646124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.646136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.646308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.646319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.646625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.646636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.646810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.646822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.646999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.647010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.647329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.647340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.647644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.647656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.647963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.647975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 
00:29:13.479 [2024-11-05 19:18:42.648288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.648300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.648649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.648661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.648972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.648984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.649147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.649158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.649349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.649359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.649682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.649693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.649880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.649891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.649939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.649948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.650256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.650267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.650431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.650443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 
00:29:13.479 [2024-11-05 19:18:42.650634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.650646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.650856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.479 [2024-11-05 19:18:42.650868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.479 qpair failed and we were unable to recover it. 00:29:13.479 [2024-11-05 19:18:42.651035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.651046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.651363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.651375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.651696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.651710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.651907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.651920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.652255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.652267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.652581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.652593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.652916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.652928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.653115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.653125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 
00:29:13.480 [2024-11-05 19:18:42.653408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.653419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.653613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.653623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.653847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.653860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.654182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.654194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.654448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.654459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.654631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.654643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.654692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.654701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.654874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.654887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.655211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.655223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.655408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.655420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 
00:29:13.480 [2024-11-05 19:18:42.655577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.655588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.655764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.655775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.656060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.656071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.656277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.656287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.656630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.656640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.656691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.656701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.657005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.657017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.657331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.657342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.657507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.657518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-05 19:18:42.657757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.480 [2024-11-05 19:18:42.657769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.480 qpair failed and we were unable to recover it. 
00:29:13.480 [2024-11-05 19:18:42.658060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.658071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.480 qpair failed and we were unable to recover it.
00:29:13.480 [2024-11-05 19:18:42.658397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.658408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.480 qpair failed and we were unable to recover it.
00:29:13.480 [2024-11-05 19:18:42.658566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.480 qpair failed and we were unable to recover it.
00:29:13.480 [2024-11-05 19:18:42.658771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.658783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.480 qpair failed and we were unable to recover it.
00:29:13.480 [2024-11-05 19:18:42.659092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.659104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.480 qpair failed and we were unable to recover it.
00:29:13.480 [2024-11-05 19:18:42.659278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.480 [2024-11-05 19:18:42.659289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.659478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.659787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.659798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.659994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.660005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.660294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.660306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.660597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.660608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.660778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.661067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.661079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.661397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.661409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.661678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.661689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.661999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.662011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.662377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.662388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.662687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.662699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.663005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.663017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.663325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.663337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.663669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.663680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.664074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.664085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.664394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.664405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.664716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.664727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.664945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.664956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.665180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.665191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.665561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.665574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.665821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.665832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.666137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.666148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.666328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.666338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.666626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.666637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.666960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.666972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.667303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.667315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.667503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.667513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.481 [2024-11-05 19:18:42.667872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.481 [2024-11-05 19:18:42.667885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.481 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.668064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.668076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.668379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.668391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.668740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.668768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.669090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.669102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.669412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.669425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.669762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.669773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.669946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.669957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.670273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.670287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.670617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.670628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.670803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.670814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.671078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.671089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.671395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.671405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.671699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.671712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.672046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.672058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.672364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.672698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.672709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.673019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.673031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.673306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.673317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.673628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.673640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.673962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.673974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.674279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.674290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.674474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.674485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.674722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.674735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.674912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.674925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.675150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.675161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.675359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.675371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.675705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.675718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.676033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.676046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.676328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.676341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.676619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.676631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.676912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.676924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.677114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.677126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.677422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.677434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.677652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.677664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.677963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.677978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.678305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.678317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.678649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.678661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.678972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.482 [2024-11-05 19:18:42.678985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.482 qpair failed and we were unable to recover it.
00:29:13.482 [2024-11-05 19:18:42.679292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.679304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.679481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.679492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.679823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.679835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.680174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.680187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.680494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.680505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.680676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.680994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.681006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.681292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.681304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.681609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.681620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.681919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.681931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.682144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.682155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.682461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.682474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.682791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.682802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.682976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.682986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.683319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.683330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.683617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.683629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.683789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.683801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.684187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.684200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.684499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.684510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.684792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.684803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.685122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.685134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.685444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.685456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.685647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.685658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.685959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.685970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.686157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.686168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.686483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.686496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.686682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.686694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.686982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.686994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.687332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.687344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.687561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.687573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.483 [2024-11-05 19:18:42.687884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.483 [2024-11-05 19:18:42.687896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.483 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.688076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.688087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.688395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.688407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.688788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.688800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.689108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.689119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.689401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.689411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.689716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.689729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.690039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.690051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.690251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.690262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.690593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.690605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.690914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.690926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.691117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.691128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.691330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.691341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.691542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.691555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.691874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.691886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.692217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.692230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.692532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.692544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.692828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.692840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.693150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.693161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.693328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.693339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.693678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.693690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.693891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.693903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.694210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.694221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.694406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.694416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.694725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.694738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.695054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.695065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.695405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.695417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.695720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.695732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.696060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.696073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.696352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.696364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.696565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.696577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.696913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.696925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.697097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.697109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.697330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.697343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.697664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.697680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.697983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.697996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.698297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.698309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.698621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.484 [2024-11-05 19:18:42.698633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.484 qpair failed and we were unable to recover it.
00:29:13.484 [2024-11-05 19:18:42.698868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.698880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.699667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.699679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.700005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.700017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.700320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.700332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.700627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.700639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.700969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.700981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.701288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.701299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.701608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.701619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.701781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.702066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.702077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.702339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.702351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.702534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.702545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.702840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.702851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.703041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.703051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.703233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.703244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.703550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.703562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.703768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.703780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.703886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.703897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.704093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.704105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.485 qpair failed and we were unable to recover it.
00:29:13.485 [2024-11-05 19:18:42.704283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.485 [2024-11-05 19:18:42.704294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.486 qpair failed and we were unable to recover it.
00:29:13.486 [2024-11-05 19:18:42.704626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.486 [2024-11-05 19:18:42.704638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.486 qpair failed and we were unable to recover it.
00:29:13.486 [2024-11-05 19:18:42.704688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.486 [2024-11-05 19:18:42.704698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.486 qpair failed and we were unable to recover it.
00:29:13.486 [2024-11-05 19:18:42.704882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.486 [2024-11-05 19:18:42.704894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.486 qpair failed and we were unable to recover it.
00:29:13.486 [2024-11-05 19:18:42.705201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.486 [2024-11-05 19:18:42.705213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.486 qpair failed and we were unable to recover it.
00:29:13.486 [2024-11-05 19:18:42.705517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.705529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.705717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.705728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.706013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.706025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.706343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.706354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.706514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.706525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.706712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.706724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.706908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.706920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.707087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.707098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.707377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.707387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.707686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.707697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 
00:29:13.486 [2024-11-05 19:18:42.707993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.708014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.708335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.708346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.708650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.708662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.708822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.708834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.709179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.709190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.709498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.709510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.709825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.709837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.709882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.709891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.710187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.710199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.710509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.710521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 
00:29:13.486 [2024-11-05 19:18:42.710907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.710919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.711222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.711237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.711453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.711466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.711767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.711780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.712112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.712123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.712309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.712321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.712606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.712618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.712915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.712927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.713287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.713298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.713465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.713475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 
00:29:13.486 [2024-11-05 19:18:42.713770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.713782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.714095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.714107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.714436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.486 [2024-11-05 19:18:42.714448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.486 qpair failed and we were unable to recover it. 00:29:13.486 [2024-11-05 19:18:42.714754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.714767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.715107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.715119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.715446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.715458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.715646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.715657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.715822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.715833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.716122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.716133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.716446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.716458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 
00:29:13.487 [2024-11-05 19:18:42.716728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.716739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.717090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.717102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.717439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.717451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.717755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.717767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.718140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.718151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.718362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.718373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.718680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.718692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.719000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.719012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.719324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.719336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.719643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.719655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 
00:29:13.487 [2024-11-05 19:18:42.720021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.720034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.720335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.720347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.720691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.720703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.720886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.720899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.721188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.721200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.721547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.721559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.721860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.721872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.722178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.722190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.722507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.722518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.722836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.722858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 
00:29:13.487 [2024-11-05 19:18:42.723069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.723081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.723334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.723346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.723693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.723704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.724019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.724032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.724337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.724349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.724657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.724670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.724979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.487 [2024-11-05 19:18:42.724991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.487 qpair failed and we were unable to recover it. 00:29:13.487 [2024-11-05 19:18:42.725305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.725317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.725628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.725638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.725812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.725823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-11-05 19:18:42.726053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.726065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.726255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.726266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.726484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.726495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.726856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.726868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.727202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.727213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.727567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.727579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.727894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.727906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.728241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.728252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.728586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.728598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.728906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.728917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-11-05 19:18:42.729288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.729299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.729469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.729480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.729781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.729793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.730120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.730131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.730521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.730533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.730844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.730858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.731164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.731175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.731454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.731466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.731773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.731784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.732111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.732125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 
00:29:13.488 [2024-11-05 19:18:42.732460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.732473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.732780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.732793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.732956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.732967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.733156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.733168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.733468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.733479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.733788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.733800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.734114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.734125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.734430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.734442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.734781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.488 [2024-11-05 19:18:42.734794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.488 qpair failed and we were unable to recover it. 00:29:13.488 [2024-11-05 19:18:42.735115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.735126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-11-05 19:18:42.735473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.735484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.735786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.735799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.736115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.736126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.736432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.736445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.736751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.736763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.737093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.737105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.737269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.737280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.737586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.737597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.737907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.737919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.738187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.738198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-11-05 19:18:42.738369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.738380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.738708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.738719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.739029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.739042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.739341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.739352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.739638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.739651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.739955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.739967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.740273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.740288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.740608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.740620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.740955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.740967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.741254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.741267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-11-05 19:18:42.741568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.741579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.741887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.741899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.742184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.742196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.742492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.742505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.742880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.742892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.743167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.743179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.743524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.743535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.743923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.743934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.744106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.744117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.744451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.744462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 
00:29:13.489 [2024-11-05 19:18:42.744805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.744817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.744991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.745002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.745287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.745298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.745614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.745625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.745915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.745927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.746256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.746267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.746575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.489 [2024-11-05 19:18:42.746587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.489 qpair failed and we were unable to recover it. 00:29:13.489 [2024-11-05 19:18:42.746901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.746913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.747088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.747099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.747471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.747482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-11-05 19:18:42.747670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.747681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.747990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.748002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.748284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.748295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.748602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.748614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.748787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.748798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.748965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.748975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.749329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.749340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.749648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.749659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.749873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.749884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 00:29:13.490 [2024-11-05 19:18:42.750215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.750228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it. 
00:29:13.490 [2024-11-05 19:18:42.750568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.490 [2024-11-05 19:18:42.750580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.490 qpair failed and we were unable to recover it.
[... identical retry output omitted: the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pair repeats for tqpair=0x150a0c0 with addr=10.0.0.2, port=4420, with only the timestamps advancing, between the excerpt above and the excerpt below ...]
00:29:13.810 [2024-11-05 19:18:42.809912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.809924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it.
00:29:13.810 [2024-11-05 19:18:42.810234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.810247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.810560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.810572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.810873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.810884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.811219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.811230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.811543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.811554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.811859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.811871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.812183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.812194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.812500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.812511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.812850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.812862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.813188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.813201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 
00:29:13.810 [2024-11-05 19:18:42.813530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.813542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.813853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.813866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.814256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.814267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.814576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.814589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.814779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.814792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.814981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.814993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.815325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.815336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.815634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.815644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.815923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.815934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 00:29:13.810 [2024-11-05 19:18:42.816236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.810 [2024-11-05 19:18:42.816247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.810 qpair failed and we were unable to recover it. 
00:29:13.810 [2024-11-05 19:18:42.816581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.816592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.816896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.816909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.817101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.817112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.817479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.817490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.817866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.817878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.818174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.818185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.818402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.818414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.818722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.818734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.819072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.819084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.819256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.819267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 
00:29:13.811 [2024-11-05 19:18:42.819574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.819586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.819888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.819900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.820081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.820092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.820407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.820418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.820708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.820720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.821027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.821039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.821350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.821362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.821668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.821680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.821977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.821989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.822294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.822305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 
00:29:13.811 [2024-11-05 19:18:42.822615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.822627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.822914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.822928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.823124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.823136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.823460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.823472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.823784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.823795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.824112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.824123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.824309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.824320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.824634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.824645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.824971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.824983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.825265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.825276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 
00:29:13.811 [2024-11-05 19:18:42.825594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.825605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.825911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.825922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.826224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.826236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.826566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.826577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.826872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.826883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.827221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.827232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.827541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.827553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.827872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.827884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.811 [2024-11-05 19:18:42.828224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.811 [2024-11-05 19:18:42.828236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.811 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.828457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.828468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 
00:29:13.812 [2024-11-05 19:18:42.828778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.828789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.828961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.828971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.829253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.829264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.829566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.829578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.829880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.829891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.830272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.830284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.830476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.830487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.830798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.830809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.831131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.831144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.831452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.831463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 
00:29:13.812 [2024-11-05 19:18:42.831766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.831778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.832092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.832103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.832404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.832417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.832564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.832575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.832878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.832890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.833210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.833222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.833535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.833546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.833859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.833872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.834064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.834075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.834405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.834417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 
00:29:13.812 [2024-11-05 19:18:42.834716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.834726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.834918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.834929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.835112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.835122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.835454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.835465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.835650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.835661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.835900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.835912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.836086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.836098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.836409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.836421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.836731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.836742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.836927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.836938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 
00:29:13.812 [2024-11-05 19:18:42.837268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.837280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.837586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.837597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.837908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.837920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.838198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.838210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.838507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.838518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.838837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.838852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.839168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.839180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.812 [2024-11-05 19:18:42.839492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.812 [2024-11-05 19:18:42.839503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.812 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.839807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.839819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.840125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.840137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 
00:29:13.813 [2024-11-05 19:18:42.840448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.840459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.840780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.840793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.841117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.841128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.841298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.841309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.841641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.841652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.841967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.841978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.842269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.842281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.842506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.842517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.842762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.842773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.843108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.843120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 
00:29:13.813 [2024-11-05 19:18:42.843416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.843427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.843734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.843755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.844055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.844066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.844240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.844251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.844581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.844594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.844920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.844932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.845246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.845258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.845563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.845574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.845860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.845871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.846179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.846190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 
00:29:13.813 [2024-11-05 19:18:42.846375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.846386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.846697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.846708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.847010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.847021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.847331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.847342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.847651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.847664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.847975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.847986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.848316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.848327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.848635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.848647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.848960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.848973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 00:29:13.813 [2024-11-05 19:18:42.849304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.813 [2024-11-05 19:18:42.849315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.813 qpair failed and we were unable to recover it. 
00:29:13.813 [2024-11-05 19:18:42.849598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.813 [2024-11-05 19:18:42.849609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.813 qpair failed and we were unable to recover it.
00:29:13.813 [... the same three-line error sequence repeats back-to-back (~210 occurrences, timestamps 19:18:42.849598 through 19:18:42.910848): every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and tqpair 0x150a0c0 cannot recover ...]
00:29:13.819 [2024-11-05 19:18:42.910837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.819 [2024-11-05 19:18:42.910848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.819 qpair failed and we were unable to recover it.
00:29:13.819 [2024-11-05 19:18:42.911183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.911195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.911527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.911539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.911844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.911856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.912159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.912170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.912514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.912525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.912829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.912841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.913028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.913039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.913276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.913287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.913657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.913667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.913959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.913971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 
00:29:13.819 [2024-11-05 19:18:42.914285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.914296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.914609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.914620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.914830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.914841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.914995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.915005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.915281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.915292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.915620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.915630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.915928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.915939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.916271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.916283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.916581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.916593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.819 [2024-11-05 19:18:42.916862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.916873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 
00:29:13.819 [2024-11-05 19:18:42.917152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.819 [2024-11-05 19:18:42.917164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.819 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.917501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.917512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.917867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.917880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.918205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.918216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.918522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.918535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.918834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.918845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.919013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.919024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.919336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.919347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.919664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.919675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.919856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.919868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-11-05 19:18:42.920187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.920199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.920510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.920522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.920859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.920871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.921163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.921176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.921487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.921499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.921812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.921824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.922006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.922017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.922197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.922207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.922528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.922541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.922873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-11-05 19:18:42.923045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.923084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.923383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.923394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.923705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.923716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.924042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.924054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.924355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.924367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.924515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.924528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.924840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.924852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.925162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.925174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.925470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.925481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.925776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.925787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 
00:29:13.820 [2024-11-05 19:18:42.926050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.926061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.926338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.926350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.926661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.926672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.926970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.926983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.927297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.927308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.927618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.927630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.927982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.927994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.928310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.928322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.928635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.928646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.820 qpair failed and we were unable to recover it. 00:29:13.820 [2024-11-05 19:18:42.928979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.820 [2024-11-05 19:18:42.928991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-11-05 19:18:42.929299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.929310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.929604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.929616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.929917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.929928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.930241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.930253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.930559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.930570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.930770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.930783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.931081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.931093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.931405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.931419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.931592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.931603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.931816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.931828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-11-05 19:18:42.932082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.932092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.932399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.932411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.932685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.932697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.932876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.932889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.933074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.933085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.933169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.933179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.933384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.933703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.933715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.933903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.933915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.934226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.934237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-11-05 19:18:42.934602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.934613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.934744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.934759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.934952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.934964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.935163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.935279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.935289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.935604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.935615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.935792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.935803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.935932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.935942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.936277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.936288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.936600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.936612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 
00:29:13.821 [2024-11-05 19:18:42.936914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.936925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.936975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.936985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.937177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.937188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.937494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.937504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.937811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.937824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.938032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.938043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.938222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.938235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.938512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.938524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.821 qpair failed and we were unable to recover it. 00:29:13.821 [2024-11-05 19:18:42.938681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.821 [2024-11-05 19:18:42.938691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.938797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.938808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-11-05 19:18:42.939013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.939025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.939216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.939227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.939556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.939568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.939876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.939889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.940079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.940091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.940419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.940430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.940729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.940741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.941061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.941389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.941401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.941560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.941570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-11-05 19:18:42.941762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.941775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.941943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.941955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.942264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.942275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.942432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.942443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.942628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.942640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.942955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.942966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.943147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.943158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.943358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.943370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.943552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.943563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.943776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.943788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-11-05 19:18:42.943987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.943997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.944184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.944195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.944505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.944516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.944680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.944690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.944979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.944991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.945176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.945188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.945452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.945463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.945780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.945792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.946123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.946141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 00:29:13.822 [2024-11-05 19:18:42.946451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.822 [2024-11-05 19:18:42.946462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.822 qpair failed and we were unable to recover it. 
00:29:13.822 [2024-11-05 19:18:42.946638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.822 [2024-11-05 19:18:42.946649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.822 qpair failed and we were unable to recover it.
00:29:13.822 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats approximately 210 times between 19:18:42.946 and 19:18:43.007; identical entries collapsed ...]
00:29:13.828 [2024-11-05 19:18:43.007151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.828 [2024-11-05 19:18:43.007162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.828 qpair failed and we were unable to recover it.
00:29:13.828 [2024-11-05 19:18:43.007466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.007478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.007663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.007675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.008043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.008055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.008389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.008401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.008718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.008730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.009042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.009055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.009308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.009321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.009651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.009663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.009846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.009860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.010028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.010041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 
00:29:13.828 [2024-11-05 19:18:43.010250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.010262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.010550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.010562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.010756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.010772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.011077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.011088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.011399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.011411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.011703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.011714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.012039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.012051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.012369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.012380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.012693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.012704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.013044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.013055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 
00:29:13.828 [2024-11-05 19:18:43.013368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.828 [2024-11-05 19:18:43.013379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.828 qpair failed and we were unable to recover it. 00:29:13.828 [2024-11-05 19:18:43.013690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.013701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.014005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.014017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.014303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.014314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.014628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.014641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.014971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.014983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.015334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.015671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.015682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.015987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.015999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.016302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.016314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-11-05 19:18:43.016664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.016676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.016862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.016873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.017207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.017219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.017420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.017432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.017761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.017774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.018115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.018126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.018463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.018475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.018787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.018799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.019107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.019118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.019377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.019390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-11-05 19:18:43.019739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.019756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.019936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.019947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.020212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.020224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.020558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.020570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.020878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.020890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.021152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.021164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.021340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.021352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.021521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.021532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.021758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.021770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.022079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.022091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-11-05 19:18:43.022401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.022413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.022721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.022733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.023124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.023136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.023443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.023455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.023759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.023770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.024085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.024097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.024411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.024423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.024735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.024753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.024954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.024965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 00:29:13.829 [2024-11-05 19:18:43.025150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.829 [2024-11-05 19:18:43.025161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.829 qpair failed and we were unable to recover it. 
00:29:13.829 [2024-11-05 19:18:43.025424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.025435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.025610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.025619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.025914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.025926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.026233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.026244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.026561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.026573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.026882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.026894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.027204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.027217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.027394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.027404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.027716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.027727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.028038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.028049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-11-05 19:18:43.028356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.028367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.028648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.028661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.028860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.028872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.029150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.029163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.029485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.029496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.029789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.029800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.030179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.030191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.030507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.030518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.030856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.030868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.031156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.031167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-11-05 19:18:43.031487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.031499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.031801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.031813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.032120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.032131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.032466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.032477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.032786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.032799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.033088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.033099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.033400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.033412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.033602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.033614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.033925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.033937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.034251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.034262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 
00:29:13.830 [2024-11-05 19:18:43.034569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.034581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.034919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.034931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.035236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.035248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.035558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.035569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.035885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.035898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.036225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.036236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.036556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.036567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.036899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.036910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.830 [2024-11-05 19:18:43.037218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.830 [2024-11-05 19:18:43.037230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.830 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.037514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.037525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-11-05 19:18:43.037839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.037851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.038165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.038469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.038482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.038764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.038776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.039104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.039115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.039461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.039472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.039648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.039658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.039989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.040005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.040294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.040305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.040491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.040503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-11-05 19:18:43.040680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.040691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.040989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.041002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.041203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.041214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.041383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.041395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.041670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.041681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.041868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.041881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.042208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.042219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.042530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.042542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.042929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.042941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.043112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.043124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-11-05 19:18:43.043395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.043407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.043750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.043762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.044167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.044178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.044510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.044522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.044855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.044867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.045224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.045235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.045537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.045550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.045886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.045899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.046089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.046100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 00:29:13.831 [2024-11-05 19:18:43.046262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.831 [2024-11-05 19:18:43.046273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:13.831 qpair failed and we were unable to recover it. 
00:29:13.831 [2024-11-05 19:18:43.046559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.831 [2024-11-05 19:18:43.046570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:13.831 qpair failed and we were unable to recover it.
[... trimmed: the same three-line failure repeats back-to-back roughly 200 more times in this excerpt, with target-side timestamps running from 19:18:43.046 to 19:18:43.108. Every attempt is the identical connect() failure with errno = 111 against tqpair=0x150a0c0 at addr=10.0.0.2, port=4420, and every qpair fails without recovery ...]
00:29:14.118 [2024-11-05 19:18:43.108121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.118 [2024-11-05 19:18:43.108132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.118 qpair failed and we were unable to recover it.
00:29:14.118 [2024-11-05 19:18:43.108448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.108459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.108766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.108779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.108981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.108992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.109344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.109358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.109667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.109679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.109971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.109983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.110290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.110302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.110629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.110641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.110929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.110942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.111323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.111335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 
00:29:14.118 [2024-11-05 19:18:43.111635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.111647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.111958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.111976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.112296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.112308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.112614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.112627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.112968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.112981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.113320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.113332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.113642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.113654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.113988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.114001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.114284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.114296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.114605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.114617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 
00:29:14.118 [2024-11-05 19:18:43.114958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.114971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.115274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.115287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.118 [2024-11-05 19:18:43.115468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.118 [2024-11-05 19:18:43.115481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.118 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.115852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.115864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.116118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.116130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.116300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.116313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.116617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.116628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.116813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.116824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.117145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.117157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.117544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.117556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 
00:29:14.119 [2024-11-05 19:18:43.117858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.117870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.118194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.118205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.118509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.118520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.118853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.118866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.119171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.119182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.119373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.119384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.119738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.119754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.120071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.120091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.120393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.120406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.120718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.120731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 
00:29:14.119 [2024-11-05 19:18:43.120908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.120920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.121245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.121256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.121537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.121549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.121729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.121741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.121926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.121938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.122254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.122265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.122505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.122517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.122857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.122869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.123208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.123220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.123533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.123544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 
00:29:14.119 [2024-11-05 19:18:43.123860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.123872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.124188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.124200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.124540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.124552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.124859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.125185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.125197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.125508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.125520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.125858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.125870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.126198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.126210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.126556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.126568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.119 [2024-11-05 19:18:43.126736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.126752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 
00:29:14.119 [2024-11-05 19:18:43.127067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.119 [2024-11-05 19:18:43.127078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.119 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.127391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.127402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.127744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.127760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.128038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.128049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.128363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.128374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.128676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.128687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.129097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.129109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.129444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.129457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.129780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.129791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.129977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.129987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 
00:29:14.120 [2024-11-05 19:18:43.130325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.130336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.130525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.130536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.130866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.130878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.131190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.131200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.131495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.131505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.131778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.131789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.131863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.131873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.132075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.132086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.132418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.132429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.132710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.132722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 
00:29:14.120 [2024-11-05 19:18:43.132892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.132904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.133186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.133197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.133426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.133437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.133758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.133771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.134004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.134015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.134328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.134340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.134652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.134663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.134985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.134997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.135312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.135323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.135631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.135643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 
00:29:14.120 [2024-11-05 19:18:43.135963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.135975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.136158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.136168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.136341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.136352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.136621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.136632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.136962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.136974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.137259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.137270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.137463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.137473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.137774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.120 [2024-11-05 19:18:43.137786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-11-05 19:18:43.138126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.138137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.138426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.138437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-11-05 19:18:43.138767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.138780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.139092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.139103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.139373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.139384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.139715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.139726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.140076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.140088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.140437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.140448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.140754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.140769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.141092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.141103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.141419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.141430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.141759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.141771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-11-05 19:18:43.142107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.142119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.142342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.142353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.142657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.142668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.142989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.143001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.143327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.143338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.143621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.143632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.143795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.143808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.144085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.144144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.144446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-11-05 19:18:43.144532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.144698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.144935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.144947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.145186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.145197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.145376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.145388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.145566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.145577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.145894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.145906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.146066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.146077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.146266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.146277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.146504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.146515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-11-05 19:18:43.146712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.146723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.147063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.147076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.147145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.147156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.147440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.121 [2024-11-05 19:18:43.147454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-11-05 19:18:43.147749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.147760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-11-05 19:18:43.148150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.148161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-11-05 19:18:43.148476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.148488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-11-05 19:18:43.148658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.148670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-11-05 19:18:43.148868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.148880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-11-05 19:18:43.149221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.122 [2024-11-05 19:18:43.149233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.122 qpair failed and we were unable to recover it. 
00:29:14.123 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:14.123 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:14.123 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:29:14.123 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:14.123 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
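(The four trace records above are the tail of start_nvmf_tgt: a retry counter is checked with (( i == 0 )), the helper returns 0, timing is closed out, and xtrace is switched off. As a rough sketch only, not code from this repository, such a readiness helper typically polls the target's RPC socket until it answers; the function name, retry budget, and probe command below are illustrative assumptions:

    # Hypothetical poll-until-ready loop in the style of the traced helper.
    # Succeeds as soon as the SPDK target answers an RPC, fails after 10 tries.
    wait_for_tgt_rpc() {
            local i
            for ((i = 10; i > 0; i--)); do
                    # rpc_get_methods is a cheap RPC that works once the target listens
                    scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && return 0
                    sleep 1
            done
            (( i == 0 )) && return 1    # retry budget exhausted
    }
)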
00:29:14.126 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:14.126 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:14.127 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:14.127 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
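(Interleaved with the connection errors, the trace above arms the cleanup trap, dump shared memory then run nvmftestfini on exit, and creates the test's backing device: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 allocates a RAM-backed bdev of 64 MiB with a 512-byte block size, named Malloc0. Outside the harness the equivalent direct calls would look like this sketch; the follow-up subsystem wiring is an illustrative assumption, not taken from this log:

    # Same bdev the test creates: 64 MiB, 512 B blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Illustrative follow-up for an NVMe/TCP target (not traced here):
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)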
00:29:14.127 [connect()/qpair-failure sequence repeated 80 times, 19:18:43.198960 - 19:18:43.223058]
00:29:14.129 [connect()/qpair-failure sequence repeated 6 times, 19:18:43.223394 - 19:18:43.225023]
00:29:14.129 Malloc0
00:29:14.129 [connect()/qpair-failure sequence repeated 3 times, 19:18:43.225336 - 19:18:43.225992]
00:29:14.129 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:14.129 [connect()/qpair-failure sequence repeated once, 19:18:43.226312]
00:29:14.129 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:14.129 [connect()/qpair-failure sequence repeated once, 19:18:43.226622]
00:29:14.129 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:14.129 [connect()/qpair-failure sequence repeated once, 19:18:43.226914]
00:29:14.129 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.129 [connect()/qpair-failure sequence repeated 5 times, 19:18:43.227282 - 19:18:43.228590]
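With the bdev in place, the test creates the target's TCP transport; the [[ 0 == 0 ]] lines are autotest_common.sh checking each rpc_cmd's exit status. Standalone, the transport step is roughly the following sketch (in rpc.py's option set, -o appears to be the TCP-only c2h-success toggle; treat that reading of the flag as an assumption):

    # instantiate the NVMe-oF TCP transport inside the running target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o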
00:29:14.129 [connect()/qpair-failure sequence repeated 10 times, 19:18:43.228902 - 19:18:43.231497]
00:29:14.130 [connect()/qpair-failure sequence repeated 4 times, 19:18:43.231756 - 19:18:43.232733]
00:29:14.130 [2024-11-05 19:18:43.232797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:14.130 [connect()/qpair-failure sequence repeated 6 times, 19:18:43.233052 - 19:18:43.234657]
00:29:14.130 [connect()/qpair-failure sequence repeated 20 times, 19:18:43.234874 - 19:18:43.240885]
00:29:14.130 [connect()/qpair-failure sequence repeated 3 times, 19:18:43.241087 - 19:18:43.241706]
00:29:14.131 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:14.131 [connect()/qpair-failure sequence repeated once, 19:18:43.241993]
00:29:14.131 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:14.131 [connect()/qpair-failure sequence repeated once, 19:18:43.242305]
00:29:14.131 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:14.131 [connect()/qpair-failure sequence repeated once, 19:18:43.242628]
00:29:14.131 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.131 [connect()/qpair-failure sequence repeated 2 times, 19:18:43.242848 - 19:18:43.243283]
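Next step of the bring-up: nvmf_create_subsystem creates subsystem nqn.2016-06.io.spdk:cnode1, with -a allowing any host NQN to connect and -s setting its serial number. Roughly, as a standalone sketch:

    # create the subsystem; -a = allow any host, -s = serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001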
00:29:14.131 [2024-11-05 19:18:43.243588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.243600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.243911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.243923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.244119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.244129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.244449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.244463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.244781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.244794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.245031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.245043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.245358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.245369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.245652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.245663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.245940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.245951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.246272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.246283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 
00:29:14.131 [2024-11-05 19:18:43.246592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.246603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.246935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.246947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.247249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.247261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.247613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.247625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.247811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.247824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.248162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.248174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.248489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.248500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.248801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.248813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.249124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.249135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.249415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.249426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 
00:29:14.131 [2024-11-05 19:18:43.249739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.249753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.249918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.249929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.250233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.250244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.250576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.250587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.250809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.250820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.251043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.131 [2024-11-05 19:18:43.251055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.131 qpair failed and we were unable to recover it. 00:29:14.131 [2024-11-05 19:18:43.251344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.251355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.251694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.251705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.252027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.252039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.252351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.252363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 
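errno = 111 here is ECONNREFUSED: the host side of the test keeps calling connect() against 10.0.0.2:4420 while nothing is listening on that port yet, so every attempt on tqpair=0x150a0c0 is refused and logged as the same three-line failure, a few hundred microseconds apart. The condition is easy to check by hand with a plain port probe; the snippet below is illustrative only and not part of the test scripts:

  # Poll the NVMe/TCP listener the way the initiator effectively does;
  # nc exits non-zero for as long as connect() is being refused (errno 111).
  until nc -z 10.0.0.2 4420; do
      sleep 0.1   # brief back-off between attempts
  done
  echo 'listener is up'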
00:29:14.132 [2024-11-05 19:18:43.252687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.252701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.253086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.253098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.253402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.253413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.253731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.253742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:14.132 [2024-11-05 19:18:43.254089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.254101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:14.132 [2024-11-05 19:18:43.254314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.254325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.254515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.254527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:14.132 [2024-11-05 19:18:43.254705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.254717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.132 [2024-11-05 19:18:43.255060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.255072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.255371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.255383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.255576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.255589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.255915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.255927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.256254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.256265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.256453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.256465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.256644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.256655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.256978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.256990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.257331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.257343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
00:29:14.132 [2024-11-05 19:18:43.257721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.132 [2024-11-05 19:18:43.257732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.132 qpair failed and we were unable to recover it.
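The shell trace interleaved with the errors (autotest_common.sh@589, @561, @10) is the test harness at work: the [[ 0 == 0 ]] test is rpc_cmd checking the exit status of the RPC it just issued, and xtrace_disable / set +x silence tracing again afterwards. rpc_cmd itself is a helper defined in test/common/autotest_common.sh that forwards its arguments to scripts/rpc.py; a deliberately simplified stand-in is sketched below (an assumption for illustration; the real function also handles socket selection and keeps a persistent rpc.py session):

  # Simplified sketch of the harness's rpc_cmd helper; RPC_SOCK is an
  # illustrative variable, /var/tmp/spdk.sock is the usual default socket.
  rpc_cmd() {
      "$rootdir/scripts/rpc.py" -s "${RPC_SOCK:-/var/tmp/spdk.sock}" "$@"
  }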
00:29:14.132 [2024-11-05 19:18:43.258044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.258058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.258325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.258336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.258653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.258664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.258965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.258977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.259283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.259294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.259597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.132 [2024-11-05 19:18:43.259609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.132 qpair failed and we were unable to recover it. 00:29:14.132 [2024-11-05 19:18:43.259797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.259809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.260009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.260019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.260299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.260310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.260676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.260688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 
00:29:14.133 [2024-11-05 19:18:43.260859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.260870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.261192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.261204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.261521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.261533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.261862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.261874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.262178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.262190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.262525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.262537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.262855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.262867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.263178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.263190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.263502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.263513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 00:29:14.133 [2024-11-05 19:18:43.263851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.133 [2024-11-05 19:18:43.263863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420 00:29:14.133 qpair failed and we were unable to recover it. 
00:29:14.133 [2024-11-05 19:18:43.264186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.264196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.264508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.264521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.264811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.264823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.265192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.265203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.265513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.265525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] [2024-11-05 19:18:43.265838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.265851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.266018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.266029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-05 19:18:43.266331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.266343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-11-05 19:18:43.266686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.266698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.133 [2024-11-05 19:18:43.267035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.267047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.267366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.267377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.267715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.267727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.268033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.268045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.268350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.268363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.268678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.268690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
00:29:14.133 [2024-11-05 19:18:43.268779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.133 [2024-11-05 19:18:43.268790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150a0c0 with addr=10.0.0.2, port=4420
00:29:14.133 qpair failed and we were unable to recover it.
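The two rpc_cmd calls traced above (host/target_disconnect.sh@24 and @25) are the tail of the usual NVMe-oF TCP target bring-up. Reconstructed as direct rpc.py invocations the sequence looks roughly like this; the transport, bdev, and subsystem steps happen earlier in the script, and the exact flags shown for them here are illustrative:

  # Sketch of the target-side bring-up this test drives through rpc_cmd.
  scripts/rpc.py nvmf_create_transport -t TCP                          # register the TCP transport
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                  # RAM-backed bdev (size/block size illustrative)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # -a: allow any host (illustrative)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420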
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Write completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.133 Read completed with error (sct=0, sc=8)
00:29:14.133 starting I/O failed
00:29:14.134 Write completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Write completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 Read completed with error (sct=0, sc=8)
00:29:14.134 starting I/O failed
00:29:14.134 [2024-11-05 19:18:43.269014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.134 [2024-11-05 19:18:43.269368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.134 [2024-11-05 19:18:43.269387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420
00:29:14.134 qpair failed and we were unable to recover it.
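The burst of "completed with error (sct=0, sc=8)" lines is the in-flight I/O being failed back as the qpair is torn down: status code type 0 is the generic command status set, and status code 0x08 there is defined by the NVMe base specification as Command Aborted due to SQ Deletion. After the CQ transport error -6 the host abandons the old connection and re-dials through a fresh qpair, which is why the tqpair address changes from 0x150a0c0 to 0x7fd674000b90. A small decoder for the two status pairs that occur in this log (anything else is left undecoded):

  # Decode the (sct, sc) pairs seen in this log; the mapping follows the
  # NVMe base and fabrics status tables, and only these codes are handled.
  decode_nvme_status() {
      local sct=$1 sc=$2
      case "$sct/$sc" in
          0/8)   echo 'generic status: command aborted due to SQ deletion' ;;
          1/130) echo 'command-specific (fabrics) status: connect invalid parameters (0x82)' ;;
          *)     printf 'sct=%d sc=0x%02x: not decoded here\n' "$sct" "$sc" ;;
      esac
  }
  decode_nvme_status 0 8     # the aborted bdevperf reads/writes above
  decode_nvme_status 1 130   # the failed fabric CONNECTs that follow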
00:29:14.134 [2024-11-05 19:18:43.269706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.269716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.270139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.270169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.270494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.270508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.270964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.270994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.271359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.271370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.271712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.271720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.272152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.272182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.272498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.272508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.273007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.134 [2024-11-05 19:18:43.273037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd674000b90 with addr=10.0.0.2, port=4420 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-05 19:18:43.273072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.134 [2024-11-05 19:18:43.283762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.134 [2024-11-05 19:18:43.283824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.134 [2024-11-05 19:18:43.283837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.134 [2024-11-05 19:18:43.283843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.134 [2024-11-05 19:18:43.283848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.134 [2024-11-05 19:18:43.283862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.134 qpair failed and we were unable to recover it.
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:14.134 19:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 517847
00:29:14.134 [2024-11-05 19:18:43.293700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.134 [2024-11-05 19:18:43.293765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.134 [2024-11-05 19:18:43.293777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.134 [2024-11-05 19:18:43.293783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.134 [2024-11-05 19:18:43.293787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.134 [2024-11-05 19:18:43.293799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.134 qpair failed and we were unable to recover it.
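From this point the failure mode changes. TCP connects now succeed (the nvmf_tcp_listen notice shows the target listening again), but the NVMe-oF CONNECT command itself is rejected: the target logs Unknown controller ID 0x1, consistent with the host trying to re-attach its I/O queue (qpair id 2) for a controller that the freshly restarted target no longer knows about, so the CONNECT completes with sct 1, sc 130 and the qpair is given up. Note that nvme_fabric.c prints the status code in decimal; converted to the hex form used in the NVMe-oF status tables it is the fabrics "connect invalid parameters" code:

  # sc is logged in decimal; show it the way the spec tables list it.
  printf 'sc 130 -> 0x%02x\n' 130   # 0x82, connect invalid parameters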
00:29:14.134 [2024-11-05 19:18:43.303556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-05 19:18:43.303606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-05 19:18:43.303616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-05 19:18:43.303622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-05 19:18:43.303626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.134 [2024-11-05 19:18:43.303637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.313580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-05 19:18:43.313635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-05 19:18:43.313645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-05 19:18:43.313650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-05 19:18:43.313655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.134 [2024-11-05 19:18:43.313665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.323574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-05 19:18:43.323664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-05 19:18:43.323675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-05 19:18:43.323681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-05 19:18:43.323686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.134 [2024-11-05 19:18:43.323697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-05 19:18:43.333640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-05 19:18:43.333692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-05 19:18:43.333703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-05 19:18:43.333711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-05 19:18:43.333716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.134 [2024-11-05 19:18:43.333727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.134 qpair failed and we were unable to recover it. 00:29:14.134 [2024-11-05 19:18:43.343691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-05 19:18:43.343741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-05 19:18:43.343755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-05 19:18:43.343760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-05 19:18:43.343765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.343776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.353702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.353799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.353809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.353814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.353819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.353831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 
00:29:14.135 [2024-11-05 19:18:43.363811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.363878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.363888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.363893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.363898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.363908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.373800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.373852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.373862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.373867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.373872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.373888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.383837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.383885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.383895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.383900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.383904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.383915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 
00:29:14.135 [2024-11-05 19:18:43.393836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.393887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.393897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.393902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.393906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.393916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.403892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.403940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.403950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.403955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.403960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.403971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.413953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.414002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.414012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.414017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.414022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.414032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 
00:29:14.135 [2024-11-05 19:18:43.423918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.135 [2024-11-05 19:18:43.423969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.135 [2024-11-05 19:18:43.423980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.135 [2024-11-05 19:18:43.423985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.135 [2024-11-05 19:18:43.423990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.135 [2024-11-05 19:18:43.424000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.135 qpair failed and we were unable to recover it. 00:29:14.135 [2024-11-05 19:18:43.433841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.433891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.433901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.433906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.433911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.433922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.443997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.444049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.444059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.444064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.444069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.444079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 
00:29:14.396 [2024-11-05 19:18:43.453987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.454037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.454047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.454052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.454057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.454067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.464035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.464080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.464093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.464098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.464103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.464113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.474039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.474091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.474101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.474107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.474113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.474123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 
00:29:14.396 [2024-11-05 19:18:43.483968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.484020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.484030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.484035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.484039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.484049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.494095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.494150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.494160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.494165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.494170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.494180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.504155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.504204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.504213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.504219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.504224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.504237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 
00:29:14.396 [2024-11-05 19:18:43.514143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.514206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.514217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.514223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.514227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.514238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.524258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.524312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.524323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.524328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.524332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.524343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 00:29:14.396 [2024-11-05 19:18:43.534267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.396 [2024-11-05 19:18:43.534316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.396 [2024-11-05 19:18:43.534326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.396 [2024-11-05 19:18:43.534331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.396 [2024-11-05 19:18:43.534336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.396 [2024-11-05 19:18:43.534346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.396 qpair failed and we were unable to recover it. 
00:29:14.396 [2024-11-05 19:18:43.544310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.396 [2024-11-05 19:18:43.544364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.396 [2024-11-05 19:18:43.544374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.396 [2024-11-05 19:18:43.544379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.396 [2024-11-05 19:18:43.544383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.396 [2024-11-05 19:18:43.544393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.396 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.554280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.554327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.554337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.554342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.554347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.554357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.564311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.564361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.564370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.564375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.564380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.564391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.574336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.574386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.574396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.574401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.574406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.574416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.584378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.584423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.584433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.584438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.584443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.584453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.594355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.594405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.594417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.594422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.594427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.594438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.604484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.604533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.604543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.604548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.604553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.604563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.614311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.614358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.614368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.614373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.614378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.614388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.624454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.624503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.624513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.624518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.624523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.624533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.634493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.634555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.634565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.634570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.634577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.634587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.644521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.644575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.644585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.644590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.644594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.644605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.654542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.654611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.654630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.654637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.654642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.654657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.664430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.664475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.664487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.664492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.664497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.664508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.674607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.397 [2024-11-05 19:18:43.674661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.397 [2024-11-05 19:18:43.674671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.397 [2024-11-05 19:18:43.674677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.397 [2024-11-05 19:18:43.674681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.397 [2024-11-05 19:18:43.674692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.397 qpair failed and we were unable to recover it.
00:29:14.397 [2024-11-05 19:18:43.684636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-05 19:18:43.684688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-05 19:18:43.684698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-05 19:18:43.684703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-05 19:18:43.684708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.398 [2024-11-05 19:18:43.684718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-05 19:18:43.694509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-05 19:18:43.694555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-05 19:18:43.694566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-05 19:18:43.694571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-05 19:18:43.694575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.398 [2024-11-05 19:18:43.694586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-05 19:18:43.704550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-05 19:18:43.704608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-05 19:18:43.704618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-05 19:18:43.704624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-05 19:18:43.704628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.398 [2024-11-05 19:18:43.704639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-05 19:18:43.714708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-05 19:18:43.714760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-05 19:18:43.714771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-05 19:18:43.714776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-05 19:18:43.714780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.398 [2024-11-05 19:18:43.714791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.724609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.724660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.724673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.724678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.724682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.724693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.734759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.734814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.734824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.734829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.734834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.734845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.744804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.744853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.744863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.744868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.744872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.744883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.754817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.754866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.754876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.754882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.754886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.754897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.764848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.764897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.764907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.764914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.764919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.764929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.774892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.774938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.774948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.774953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.774958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.774968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.784908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.784957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.784966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.784972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.784976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.784986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.794953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.795001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.660 [2024-11-05 19:18:43.795011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.660 [2024-11-05 19:18:43.795017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.660 [2024-11-05 19:18:43.795021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.660 [2024-11-05 19:18:43.795032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.660 qpair failed and we were unable to recover it.
00:29:14.660 [2024-11-05 19:18:43.804983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.660 [2024-11-05 19:18:43.805030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.805039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.805045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.805049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.805059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.814982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.815038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.815048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.815053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.815058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.815068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.825033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.825080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.825090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.825096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.825100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.825110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.834931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.834982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.834993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.834998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.835003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.835014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.844957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.845011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.845021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.845026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.845031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.845041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.855105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.855155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.855165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.855170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.855175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.855185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.865112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.865165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.865175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.865180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.865185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.865195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.875179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.875226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.875236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.875240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.875245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.875256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.885220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.885269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.885279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.885284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.885288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.885298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.895233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.895306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.895316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.895324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.895328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.895338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.905247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.905291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.905301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.905306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.905311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.905321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.915149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.915204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.915214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.915219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.915223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.915234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.925182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.661 [2024-11-05 19:18:43.925230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.661 [2024-11-05 19:18:43.925240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.661 [2024-11-05 19:18:43.925245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.661 [2024-11-05 19:18:43.925250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.661 [2024-11-05 19:18:43.925260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.661 qpair failed and we were unable to recover it.
00:29:14.661 [2024-11-05 19:18:43.935316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-05 19:18:43.935365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-05 19:18:43.935375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-05 19:18:43.935380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-05 19:18:43.935385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.662 [2024-11-05 19:18:43.935398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-05 19:18:43.945231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-05 19:18:43.945275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-05 19:18:43.945286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-05 19:18:43.945291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-05 19:18:43.945295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.662 [2024-11-05 19:18:43.945306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-05 19:18:43.955406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-05 19:18:43.955484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-05 19:18:43.955494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-05 19:18:43.955499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-05 19:18:43.955504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.662 [2024-11-05 19:18:43.955514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-05 19:18:43.965295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-05 19:18:43.965344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-05 19:18:43.965354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-05 19:18:43.965359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-05 19:18:43.965363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.662 [2024-11-05 19:18:43.965374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-05 19:18:43.975437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-05 19:18:43.975488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-05 19:18:43.975497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-05 19:18:43.975503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-05 19:18:43.975508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.662 [2024-11-05 19:18:43.975518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:43.985469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:43.985518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:43.985528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:43.985533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:43.985538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:43.985548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:43.995374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:43.995428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:43.995438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:43.995443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:43.995447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:43.995458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.005544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.005597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.005610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.005616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.005620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.005632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.015539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.015587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.015596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.015602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.015606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.015617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.025594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.025647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.025660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.025665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.025670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.025680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.035499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.035547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.035557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.035562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.035567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.035577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.045651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.045732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.045742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.045750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.045755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.045765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.055675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.055724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.055734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.055739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.055743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.925 [2024-11-05 19:18:44.055758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-05 19:18:44.065666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-05 19:18:44.065707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-05 19:18:44.065717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-05 19:18:44.065722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-05 19:18:44.065730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.926 [2024-11-05 19:18:44.065740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-05 19:18:44.075735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-05 19:18:44.075799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-05 19:18:44.075808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-05 19:18:44.075814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-05 19:18:44.075818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.926 [2024-11-05 19:18:44.075829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-05 19:18:44.085771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-05 19:18:44.085820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-05 19:18:44.085831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-05 19:18:44.085836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-05 19:18:44.085841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.926 [2024-11-05 19:18:44.085852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-05 19:18:44.095777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-05 19:18:44.095863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-05 19:18:44.095873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-05 19:18:44.095878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-05 19:18:44.095883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.926 [2024-11-05 19:18:44.095893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-05 19:18:44.105784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-05 19:18:44.105830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-05 19:18:44.105840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-05 19:18:44.105845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-05 19:18:44.105849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:14.926 [2024-11-05 19:18:44.105860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-05 19:18:44.115846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.115897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.115907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.115913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.115919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.115929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 00:29:14.926 [2024-11-05 19:18:44.125851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.125895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.125905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.125910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.125915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.125925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 00:29:14.926 [2024-11-05 19:18:44.135784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.135840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.135850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.135855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.135860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.135870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-05 19:18:44.145894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.145977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.145987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.145993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.145997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.146009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 00:29:14.926 [2024-11-05 19:18:44.155931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.155983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.155998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.156003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.156008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.156018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 00:29:14.926 [2024-11-05 19:18:44.165983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.166033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.166043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.166048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.166053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.166062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-05 19:18:44.175990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-05 19:18:44.176048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-05 19:18:44.176058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-05 19:18:44.176063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-05 19:18:44.176067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.926 [2024-11-05 19:18:44.176077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.926 qpair failed and we were unable to recover it. 00:29:14.927 [2024-11-05 19:18:44.185903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.185969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.185980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.185985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.185990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.186000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 00:29:14.927 [2024-11-05 19:18:44.195941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.195987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.195997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.196003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.196010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.196020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 
00:29:14.927 [2024-11-05 19:18:44.206106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.206206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.206217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.206222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.206226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.206237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 00:29:14.927 [2024-11-05 19:18:44.216124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.216168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.216178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.216183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.216187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.216198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 00:29:14.927 [2024-11-05 19:18:44.226155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.226199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.226209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.226214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.226218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.226229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 
00:29:14.927 [2024-11-05 19:18:44.236064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.236120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.236130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.236135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.236140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.236150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 00:29:14.927 [2024-11-05 19:18:44.246194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.927 [2024-11-05 19:18:44.246243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.927 [2024-11-05 19:18:44.246253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.927 [2024-11-05 19:18:44.246258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.927 [2024-11-05 19:18:44.246263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:14.927 [2024-11-05 19:18:44.246273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.927 qpair failed and we were unable to recover it. 00:29:15.190 [2024-11-05 19:18:44.256283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.256352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.256361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.256367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.256371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.256382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-05 19:18:44.266259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.266303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.266313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.266318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.266323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.266333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 00:29:15.190 [2024-11-05 19:18:44.276338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.276388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.276398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.276403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.276408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.276418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 00:29:15.190 [2024-11-05 19:18:44.286307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.286391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.286404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.286410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.286415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.286426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-05 19:18:44.296346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.296389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.296399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.296404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.296409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.296420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 00:29:15.190 [2024-11-05 19:18:44.306360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.306406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.306416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-05 19:18:44.306421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-05 19:18:44.306426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.190 [2024-11-05 19:18:44.306436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.190 qpair failed and we were unable to recover it. 00:29:15.190 [2024-11-05 19:18:44.316271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-05 19:18:44.316319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-05 19:18:44.316329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.316334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.316339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.316349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 
00:29:15.191 [2024-11-05 19:18:44.326438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.326490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.326499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.326507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.326512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.326522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.336423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.336472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.336491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.336497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.336502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.336517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.346344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.346393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.346404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.346410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.346414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.346426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 
00:29:15.191 [2024-11-05 19:18:44.356522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.356578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.356596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.356603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.356608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.356622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.366554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.366606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.366618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.366623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.366627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.366639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.376570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.376619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.376630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.376635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.376640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.376650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 
00:29:15.191 [2024-11-05 19:18:44.386594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.386667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.386677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.386682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.386687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.386698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.396631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.396680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.396690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.396696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.396700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.396711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.406631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.406685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.406695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.406700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.406705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.406715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 
00:29:15.191 [2024-11-05 19:18:44.416544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.416606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.416616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.416621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.416626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.416637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.426704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.426796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.426806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.426811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.426816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.426827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 00:29:15.191 [2024-11-05 19:18:44.436727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.436783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.436793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.436798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.436803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.191 [2024-11-05 19:18:44.436813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.191 qpair failed and we were unable to recover it. 
00:29:15.191 [2024-11-05 19:18:44.446772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.191 [2024-11-05 19:18:44.446823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.191 [2024-11-05 19:18:44.446834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.191 [2024-11-05 19:18:44.446839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.191 [2024-11-05 19:18:44.446843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.446854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 00:29:15.192 [2024-11-05 19:18:44.456784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.456840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.456869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.456877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.456882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.456901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 00:29:15.192 [2024-11-05 19:18:44.466815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.466894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.466905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.466910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.466915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.466926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 
00:29:15.192 [2024-11-05 19:18:44.476707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.476761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.476772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.476777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.476782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.476794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 00:29:15.192 [2024-11-05 19:18:44.486863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.486915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.486926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.486931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.486936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.486946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 00:29:15.192 [2024-11-05 19:18:44.496904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.496958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.496968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.496974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.496979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.496992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 
00:29:15.192 [2024-11-05 19:18:44.506891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.192 [2024-11-05 19:18:44.506936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.192 [2024-11-05 19:18:44.506945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.192 [2024-11-05 19:18:44.506951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.192 [2024-11-05 19:18:44.506955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.192 [2024-11-05 19:18:44.506966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.192 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.516940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.516989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.516999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.517005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.517009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.517020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.527005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.527053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.527063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.527069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.527073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.527083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-11-05 19:18:44.537008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.537060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.537070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.537075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.537080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.537090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.547048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.547096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.547106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.547111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.547116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.547126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.557061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.557109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.557119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.557124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.557129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.557139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-11-05 19:18:44.567098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.567148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.567158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.567163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.567168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.567179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.577144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.577194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.577204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.577209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.577214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.577224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.587133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.587180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.587192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.587197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.587202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.587212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 
00:29:15.455 [2024-11-05 19:18:44.597176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.597227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.597237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.597242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.597247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.597258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.607219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.607270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.455 [2024-11-05 19:18:44.607281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.455 [2024-11-05 19:18:44.607286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.455 [2024-11-05 19:18:44.607291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.455 [2024-11-05 19:18:44.607301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.455 qpair failed and we were unable to recover it. 00:29:15.455 [2024-11-05 19:18:44.617229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.455 [2024-11-05 19:18:44.617272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.617282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.617287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.617292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.617303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 
00:29:15.456 [2024-11-05 19:18:44.627128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.627175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.627185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.627190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.627198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.627209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.637285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.637334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.637344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.637349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.637354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.637364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.647316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.647366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.647375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.647380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.647385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.647395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 
00:29:15.456 [2024-11-05 19:18:44.657340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.657385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.657395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.657400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.657405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.657415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.667345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.667393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.667404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.667410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.667414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.667424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.677393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.677445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.677458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.677463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.677468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.677479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 
00:29:15.456 [2024-11-05 19:18:44.687434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.687488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.687507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.687513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.687519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.687533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.697462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.697507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.697519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.697524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.697529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.697540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.707447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.707496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.707507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.707512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.707516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.707527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 
00:29:15.456 [2024-11-05 19:18:44.717395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.717449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.717464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.717469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.717474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.717486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.727557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.727606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.727616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.727622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.727626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.727637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.456 qpair failed and we were unable to recover it. 00:29:15.456 [2024-11-05 19:18:44.737537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.456 [2024-11-05 19:18:44.737590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.456 [2024-11-05 19:18:44.737600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.456 [2024-11-05 19:18:44.737605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.456 [2024-11-05 19:18:44.737610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.456 [2024-11-05 19:18:44.737620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.457 qpair failed and we were unable to recover it. 
00:29:15.457 [2024-11-05 19:18:44.747585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.457 [2024-11-05 19:18:44.747641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.457 [2024-11-05 19:18:44.747651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.457 [2024-11-05 19:18:44.747656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.457 [2024-11-05 19:18:44.747661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.457 [2024-11-05 19:18:44.747672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.457 qpair failed and we were unable to recover it. 00:29:15.457 [2024-11-05 19:18:44.757614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.457 [2024-11-05 19:18:44.757665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.457 [2024-11-05 19:18:44.757674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.457 [2024-11-05 19:18:44.757680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.457 [2024-11-05 19:18:44.757687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.457 [2024-11-05 19:18:44.757697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.457 qpair failed and we were unable to recover it. 00:29:15.457 [2024-11-05 19:18:44.767531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.457 [2024-11-05 19:18:44.767581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.457 [2024-11-05 19:18:44.767591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.457 [2024-11-05 19:18:44.767596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.457 [2024-11-05 19:18:44.767601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.457 [2024-11-05 19:18:44.767612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.457 qpair failed and we were unable to recover it. 
00:29:15.457 [2024-11-05 19:18:44.777685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.457 [2024-11-05 19:18:44.777735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.457 [2024-11-05 19:18:44.777748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.457 [2024-11-05 19:18:44.777754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.457 [2024-11-05 19:18:44.777758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.457 [2024-11-05 19:18:44.777769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.457 qpair failed and we were unable to recover it. 00:29:15.719 [2024-11-05 19:18:44.787610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-05 19:18:44.787659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-05 19:18:44.787669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-05 19:18:44.787674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-05 19:18:44.787679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.719 [2024-11-05 19:18:44.787689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.719 qpair failed and we were unable to recover it. 00:29:15.719 [2024-11-05 19:18:44.797611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-05 19:18:44.797663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-05 19:18:44.797672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-05 19:18:44.797678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-05 19:18:44.797682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.719 [2024-11-05 19:18:44.797693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-05 19:18:44.807786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-05 19:18:44.807839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-05 19:18:44.807849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-05 19:18:44.807854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-05 19:18:44.807859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.719 [2024-11-05 19:18:44.807870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.719 qpair failed and we were unable to recover it. 00:29:15.719 [2024-11-05 19:18:44.817795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-05 19:18:44.817843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-05 19:18:44.817853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.817858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.817863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.817875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.827816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.827862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.827872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.827877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.827882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.827892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-11-05 19:18:44.837846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.837897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.837907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.837913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.837918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.837929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.847891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.847943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.847955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.847961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.847965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.847976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.857886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.857940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.857950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.857955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.857960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.857970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-11-05 19:18:44.867918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.867968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.867978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.867983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.867988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.867998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.878000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.878046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.878055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.878060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.878065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.878075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.888016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.888064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.888074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.888082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.888086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.888096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-11-05 19:18:44.898027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.898071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.898081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.898086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.898091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.898101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.908057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.908104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.908114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.908119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.908124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.908134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.918053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.918100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.918109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.918115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.918120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.918130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 
00:29:15.720 [2024-11-05 19:18:44.928107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.928157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.928167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.928172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.928177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.928187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.938120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.938165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.938175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.938180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.720 [2024-11-05 19:18:44.938185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.720 [2024-11-05 19:18:44.938195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.720 qpair failed and we were unable to recover it. 00:29:15.720 [2024-11-05 19:18:44.948151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.720 [2024-11-05 19:18:44.948198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.720 [2024-11-05 19:18:44.948208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.720 [2024-11-05 19:18:44.948213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.948218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.948228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-11-05 19:18:44.958198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:44.958246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:44.958256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:44.958261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.958266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.958275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:44.968215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:44.968264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:44.968274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:44.968279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.968283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.968293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:44.978196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:44.978260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:44.978270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:44.978275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.978279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.978289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-11-05 19:18:44.988245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:44.988289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:44.988298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:44.988304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.988309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.988319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:44.998275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:44.998327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:44.998337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:44.998342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:44.998347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:44.998357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:45.008330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:45.008379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:45.008389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:45.008394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:45.008399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:45.008409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.721 [2024-11-05 19:18:45.018345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:45.018388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:45.018398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:45.018409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:45.018414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:45.018424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:45.028240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:45.028282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:45.028292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:45.028297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:45.028301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:45.028312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 00:29:15.721 [2024-11-05 19:18:45.038384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.721 [2024-11-05 19:18:45.038435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.721 [2024-11-05 19:18:45.038445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.721 [2024-11-05 19:18:45.038450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.721 [2024-11-05 19:18:45.038454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.721 [2024-11-05 19:18:45.038465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.721 qpair failed and we were unable to recover it. 
00:29:15.984 [2024-11-05 19:18:45.048312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.048370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.048381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.048387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.048391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.048402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.058463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.058516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.058526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.058531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.058536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.058549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.068356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.068402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.068416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.068421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.068426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.068438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 
00:29:15.984 [2024-11-05 19:18:45.078532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.078581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.078591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.078596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.078600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.078611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.088440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.088493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.088503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.088509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.088513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.088524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.098566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.098613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.098624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.098629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.098634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.098644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 
00:29:15.984 [2024-11-05 19:18:45.108577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.108630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.108650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.108656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.108661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.108676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.118643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.118695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.118714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.118720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.118725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.118740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.128682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.128734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.128750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.128755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.128760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.128773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 
00:29:15.984 [2024-11-05 19:18:45.138678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.138723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.138734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.138739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.138744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.138760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.148696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.148748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.148762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.148767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.148772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.148783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 00:29:15.984 [2024-11-05 19:18:45.158742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.158796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.158806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.158811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.158816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.984 [2024-11-05 19:18:45.158827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.984 qpair failed and we were unable to recover it. 
00:29:15.984 [2024-11-05 19:18:45.168780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.984 [2024-11-05 19:18:45.168835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.984 [2024-11-05 19:18:45.168846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.984 [2024-11-05 19:18:45.168851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.984 [2024-11-05 19:18:45.168856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.168867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.178805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.178850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.178861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.178867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.178871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.178882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.188811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.188859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.188869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.188874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.188881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.188892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 
00:29:15.985 [2024-11-05 19:18:45.198878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.198925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.198935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.198940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.198944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.198954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.208887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.208936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.208946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.208951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.208956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.208967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.218893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.218939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.218948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.218954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.218958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.218969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 
00:29:15.985 [2024-11-05 19:18:45.228960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.229049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.229059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.229064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.229069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.229079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.238848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.238899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.238909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.238914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.238918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.238928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 00:29:15.985 [2024-11-05 19:18:45.248956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.985 [2024-11-05 19:18:45.249002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.985 [2024-11-05 19:18:45.249012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.985 [2024-11-05 19:18:45.249017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.985 [2024-11-05 19:18:45.249021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:15.985 [2024-11-05 19:18:45.249031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.985 qpair failed and we were unable to recover it. 
00:29:15.985 [2024-11-05 19:18:45.259000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.985 [2024-11-05 19:18:45.259044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.985 [2024-11-05 19:18:45.259054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.985 [2024-11-05 19:18:45.259059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.985 [2024-11-05 19:18:45.259064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:15.985 [2024-11-05 19:18:45.259074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.985 qpair failed and we were unable to recover it.
00:29:15.985 [2024-11-05 19:18:45.269075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.985 [2024-11-05 19:18:45.269141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.985 [2024-11-05 19:18:45.269151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.985 [2024-11-05 19:18:45.269156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.985 [2024-11-05 19:18:45.269161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:15.985 [2024-11-05 19:18:45.269171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.985 qpair failed and we were unable to recover it.
00:29:15.985 [2024-11-05 19:18:45.279089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.985 [2024-11-05 19:18:45.279140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.985 [2024-11-05 19:18:45.279153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.985 [2024-11-05 19:18:45.279158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.985 [2024-11-05 19:18:45.279163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:15.985 [2024-11-05 19:18:45.279173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.985 qpair failed and we were unable to recover it.
00:29:15.985 [2024-11-05 19:18:45.289075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.985 [2024-11-05 19:18:45.289153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.985 [2024-11-05 19:18:45.289164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.985 [2024-11-05 19:18:45.289169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.985 [2024-11-05 19:18:45.289174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:15.985 [2024-11-05 19:18:45.289185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.985 qpair failed and we were unable to recover it.
00:29:15.985 [2024-11-05 19:18:45.299113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:15.985 [2024-11-05 19:18:45.299161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:15.985 [2024-11-05 19:18:45.299171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:15.985 [2024-11-05 19:18:45.299176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:15.985 [2024-11-05 19:18:45.299181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:15.985 [2024-11-05 19:18:45.299192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:15.986 qpair failed and we were unable to recover it.
00:29:16.247 [2024-11-05 19:18:45.309140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.247 [2024-11-05 19:18:45.309187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.309196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.309202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.309207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.309217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.319187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.319239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.319249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.319254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.319262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.319273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.329203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.329246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.329256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.329261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.329266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.329276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.339243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.339293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.339304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.339309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.339314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.339325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.349235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.349280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.349290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.349295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.349300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.349310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.359292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.359341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.359351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.359356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.359361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.359371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.369299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.369341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.369351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.369356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.369361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.369371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.379348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.379395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.379405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.379410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.379414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.379425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.389307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.389348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.389358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.389363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.389368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.389378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.399349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.399399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.399408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.399413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.399418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.399428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.409417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.409461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.409473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.409478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.409483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.409494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.419459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.419507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.419517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.419522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.419527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.419537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.429455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.429494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.429504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.429509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.429514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.429524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.439529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.439585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.439595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.439600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.439605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.439615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.449422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.449475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.449484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.449492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.449497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.449508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.459575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.459628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.459647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.459654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.459659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.459673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.469427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.469470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.469482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.469487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.469492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.469503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.479638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.479690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.479700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.479705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.479710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.479721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.489625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.489671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.489682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.489687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.489692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.248 [2024-11-05 19:18:45.489705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.248 qpair failed and we were unable to recover it.
00:29:16.248 [2024-11-05 19:18:45.499680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.248 [2024-11-05 19:18:45.499728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.248 [2024-11-05 19:18:45.499738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.248 [2024-11-05 19:18:45.499743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.248 [2024-11-05 19:18:45.499752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.499763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.509544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.509588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.509598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.509603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.509608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.509618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.519850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.519907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.519917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.519922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.519927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.519937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.529641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.529710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.529719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.529724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.529729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.529739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.539788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.539837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.539847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.539853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.539857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.539868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.549767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.549810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.549820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.549825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.549830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.549840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.559825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.559872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.559881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.559886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.559891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.559902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.249 [2024-11-05 19:18:45.569744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.249 [2024-11-05 19:18:45.569797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.249 [2024-11-05 19:18:45.569807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.249 [2024-11-05 19:18:45.569812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.249 [2024-11-05 19:18:45.569817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.249 [2024-11-05 19:18:45.569827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.249 qpair failed and we were unable to recover it.
00:29:16.511 [2024-11-05 19:18:45.579888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.511 [2024-11-05 19:18:45.579983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.579993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.580001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.580006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.580016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.589911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.589956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.589966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.589971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.589975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.589986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.599990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.600038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.600048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.600054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.600058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.600068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.609960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.610010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.610020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.610025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.610030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.610040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.619958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.620001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.620011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.620016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.620021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.620034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.629997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.630039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.630049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.630054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.630058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.630068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.640061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.640111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.640121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.640126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.640131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.640142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.650045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.650087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.650096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.650101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.650106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.650116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.660038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.660078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.660088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.660093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.660098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.660108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.670090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.670127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.670137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.670142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.670147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.670157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.680105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.680146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.680155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.680161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.680165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.680176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.690147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.690189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.690199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.690204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.690209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.690220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.700144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.700188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.700198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.700203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.512 [2024-11-05 19:18:45.700208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.512 [2024-11-05 19:18:45.700219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.512 qpair failed and we were unable to recover it.
00:29:16.512 [2024-11-05 19:18:45.710184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.512 [2024-11-05 19:18:45.710224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.512 [2024-11-05 19:18:45.710239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.512 [2024-11-05 19:18:45.710244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.710249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.710260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.720213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.720260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.720270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.720275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.720280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.720290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.730264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.730341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.730351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.730356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.730360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.730371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.740270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.740309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.740320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.740325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.740330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.740340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.750268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.750305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.750315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.750320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.750327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.750337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.760340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.760382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.760392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.760398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.760403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.760413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.770376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.770449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.770459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.770464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.770469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.770479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.780369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.780413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.780431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.780438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.780443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.780457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.790378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.790418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.790430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.790435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.790440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.790451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.800448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.800488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.800498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.800504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.800508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.800520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.810485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.810526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.810537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.810542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.810547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.810557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.820490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.513 [2024-11-05 19:18:45.820529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.513 [2024-11-05 19:18:45.820540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.513 [2024-11-05 19:18:45.820545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.513 [2024-11-05 19:18:45.820549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90
00:29:16.513 [2024-11-05 19:18:45.820560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:16.513 qpair failed and we were unable to recover it.
00:29:16.513 [2024-11-05 19:18:45.830534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.513 [2024-11-05 19:18:45.830571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.513 [2024-11-05 19:18:45.830581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.513 [2024-11-05 19:18:45.830586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.513 [2024-11-05 19:18:45.830591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.513 [2024-11-05 19:18:45.830602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.513 qpair failed and we were unable to recover it. 00:29:16.776 [2024-11-05 19:18:45.840553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.776 [2024-11-05 19:18:45.840594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.776 [2024-11-05 19:18:45.840606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.776 [2024-11-05 19:18:45.840612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.776 [2024-11-05 19:18:45.840617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.776 [2024-11-05 19:18:45.840628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.776 qpair failed and we were unable to recover it. 00:29:16.776 [2024-11-05 19:18:45.850457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.776 [2024-11-05 19:18:45.850515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.776 [2024-11-05 19:18:45.850527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.776 [2024-11-05 19:18:45.850532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.776 [2024-11-05 19:18:45.850537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.776 [2024-11-05 19:18:45.850548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.776 qpair failed and we were unable to recover it. 
00:29:16.776 [2024-11-05 19:18:45.860649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.776 [2024-11-05 19:18:45.860684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.776 [2024-11-05 19:18:45.860695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.776 [2024-11-05 19:18:45.860700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.776 [2024-11-05 19:18:45.860705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.860715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.870499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.870543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.870553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.870558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.870563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.870573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.880635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.880677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.880687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.880692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.880700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.880711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 
00:29:16.777 [2024-11-05 19:18:45.890663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.890706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.890716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.890721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.890726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.890736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.900681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.900722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.900733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.900738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.900743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.900758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.910720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.910760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.910770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.910775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.910780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.910791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 
00:29:16.777 [2024-11-05 19:18:45.920774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.920816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.920826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.920832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.920837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.920847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.930661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.930704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.930714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.930719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.930724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.930734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.940796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.940835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.940845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.940850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.940855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.940865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 
00:29:16.777 [2024-11-05 19:18:45.950816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.950898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.950908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.950914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.950919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.950930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.960886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.960924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.960934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.960939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.960944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.960954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.970917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.970958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.970970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.970975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.970980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.970990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 
00:29:16.777 [2024-11-05 19:18:45.980941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.980986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.980996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.981001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.981006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.981017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.777 [2024-11-05 19:18:45.990969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.777 [2024-11-05 19:18:45.991056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.777 [2024-11-05 19:18:45.991065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.777 [2024-11-05 19:18:45.991071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.777 [2024-11-05 19:18:45.991076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.777 [2024-11-05 19:18:45.991087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.777 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.000963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.001002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.001013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.001018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.001022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.001033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 
00:29:16.778 [2024-11-05 19:18:46.011014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.011058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.011068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.011075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.011080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.011090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.021023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.021066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.021076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.021081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.021086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.021096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.030925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.030966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.030975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.030981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.030985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.030996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 
00:29:16.778 [2024-11-05 19:18:46.041101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.041142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.041152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.041157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.041162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.041173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.051182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.051223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.051233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.051238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.051243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.051256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.061003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.061040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.061050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.061055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.061060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.061070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 
00:29:16.778 [2024-11-05 19:18:46.071179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.071219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.071230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.071235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.071240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.071250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.081202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.081247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.081257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.081263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.081267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.081278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 00:29:16.778 [2024-11-05 19:18:46.091245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.778 [2024-11-05 19:18:46.091322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.778 [2024-11-05 19:18:46.091332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.778 [2024-11-05 19:18:46.091337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.778 [2024-11-05 19:18:46.091342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:16.778 [2024-11-05 19:18:46.091353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.778 qpair failed and we were unable to recover it. 
00:29:17.041 [2024-11-05 19:18:46.101242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.101285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.101295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.101301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.101305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.101316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.111139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.111177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.111187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.111193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.111197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.111208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.121170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.121211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.121220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.121226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.121230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.121241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 
00:29:17.041 [2024-11-05 19:18:46.131225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.131280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.131290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.131295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.131300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.131310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.141343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.141380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.141390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.141397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.141402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.141413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.151342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.151382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.151392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.151397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.151402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.151412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 
00:29:17.041 [2024-11-05 19:18:46.161427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.161468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.161477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.161483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.161487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.161498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.171461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.171506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.171516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.171521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.171526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.171536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.181433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.181480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.181499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.181506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.181511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.181529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 
00:29:17.041 [2024-11-05 19:18:46.191390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.191451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.191463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.191469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.191474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.191485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.201529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.201574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.201593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.201600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.201605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.201619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 00:29:17.041 [2024-11-05 19:18:46.211541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.041 [2024-11-05 19:18:46.211585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.041 [2024-11-05 19:18:46.211597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.041 [2024-11-05 19:18:46.211602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.041 [2024-11-05 19:18:46.211607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.041 [2024-11-05 19:18:46.211618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.041 qpair failed and we were unable to recover it. 
00:29:17.042 [2024-11-05 19:18:46.221616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.221660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.221670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.221675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.221679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.221690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.231595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.231636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.231646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.231651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.231655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.231667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.241618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.241659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.241669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.241674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.241679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.241689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 
00:29:17.042 [2024-11-05 19:18:46.251701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.251751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.251761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.251767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.251772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.251782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.261696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.261736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.261750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.261755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.261760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.261770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.271702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.271744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.271761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.271768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.271773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.271784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 
00:29:17.042 [2024-11-05 19:18:46.281750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.281795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.281805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.281810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.281815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.281825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.291782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.291827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.291837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.291843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.291848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.291859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.301791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.301833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.301843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.301848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.301853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.301863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 
00:29:17.042 [2024-11-05 19:18:46.311829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.311903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.311913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.311918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.311925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.311935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.321752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.321793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.042 [2024-11-05 19:18:46.321803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.042 [2024-11-05 19:18:46.321808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.042 [2024-11-05 19:18:46.321813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.042 [2024-11-05 19:18:46.321823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.042 qpair failed and we were unable to recover it. 00:29:17.042 [2024-11-05 19:18:46.331756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.042 [2024-11-05 19:18:46.331798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.043 [2024-11-05 19:18:46.331809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.043 [2024-11-05 19:18:46.331814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.043 [2024-11-05 19:18:46.331819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.043 [2024-11-05 19:18:46.331830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.043 qpair failed and we were unable to recover it. 
00:29:17.043 [2024-11-05 19:18:46.341924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.043 [2024-11-05 19:18:46.341995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.043 [2024-11-05 19:18:46.342005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.043 [2024-11-05 19:18:46.342010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.043 [2024-11-05 19:18:46.342015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.043 [2024-11-05 19:18:46.342026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.043 qpair failed and we were unable to recover it. 00:29:17.043 [2024-11-05 19:18:46.351963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.043 [2024-11-05 19:18:46.352009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.043 [2024-11-05 19:18:46.352018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.043 [2024-11-05 19:18:46.352024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.043 [2024-11-05 19:18:46.352029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.043 [2024-11-05 19:18:46.352039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.043 qpair failed and we were unable to recover it. 00:29:17.043 [2024-11-05 19:18:46.361948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.043 [2024-11-05 19:18:46.362000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.043 [2024-11-05 19:18:46.362010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.043 [2024-11-05 19:18:46.362016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.043 [2024-11-05 19:18:46.362020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.043 [2024-11-05 19:18:46.362031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.043 qpair failed and we were unable to recover it. 
[... the same six-record CONNECT failure sequence repeats for 63 further qpair attempts, roughly one every 10 ms, from 19:18:46.371 through 19:18:46.993 (console time 00:29:17.305 to 00:29:17.836); only the timestamps change (tqpair stays 0x7fd674000b90), and every attempt ends with "qpair failed and we were unable to recover it." The final three attempts follow. ...]
00:29:17.836 [2024-11-05 19:18:47.003563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.003616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.003626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.003631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.003639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.003650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 00:29:17.836 [2024-11-05 19:18:47.013720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.013796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.013806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.013812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.013816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.013827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 00:29:17.836 [2024-11-05 19:18:47.023739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.023828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.023838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.023844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.023849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.023859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 
00:29:17.836 [2024-11-05 19:18:47.033670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.033711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.033720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.033726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.033730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.033742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 00:29:17.836 [2024-11-05 19:18:47.043793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.043834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.043844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.043849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.043854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.043864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 00:29:17.836 [2024-11-05 19:18:47.053838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.053909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.053919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.053925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.053929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.053940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 
00:29:17.836 [2024-11-05 19:18:47.063838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-05 19:18:47.063885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-05 19:18:47.063895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-05 19:18:47.063900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-05 19:18:47.063905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.836 [2024-11-05 19:18:47.063915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.836 qpair failed and we were unable to recover it. 00:29:17.836 [2024-11-05 19:18:47.073853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.073894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.073903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.073908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.073913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.073923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:17.837 [2024-11-05 19:18:47.083879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.083920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.083930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.083935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.083939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.083949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-05 19:18:47.093974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.094061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.094071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.094077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.094082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.094093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:17.837 [2024-11-05 19:18:47.103950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.103990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.104000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.104005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.104009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.104019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:17.837 [2024-11-05 19:18:47.113943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.113986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.113996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.114001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.114006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.114016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-05 19:18:47.123870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.123912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.123922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.123927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.123932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.123942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:17.837 [2024-11-05 19:18:47.134067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.134110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.134120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.134131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.134135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.134146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:17.837 [2024-11-05 19:18:47.144072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.144114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.144124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.144129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.144134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.144144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-05 19:18:47.154087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-05 19:18:47.154129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-05 19:18:47.154139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-05 19:18:47.154144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-05 19:18:47.154148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:17.837 [2024-11-05 19:18:47.154159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.837 qpair failed and we were unable to recover it. 00:29:18.099 [2024-11-05 19:18:47.164126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.164171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.164181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.164186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.164191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.164201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.174207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.174274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.174284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.174289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.174294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.174307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 
00:29:18.100 [2024-11-05 19:18:47.184338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.184378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.184388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.184393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.184398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.184408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.194195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.194246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.194256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.194261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.194265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.194276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.204211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.204252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.204261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.204266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.204271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.204280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 
00:29:18.100 [2024-11-05 19:18:47.214234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.214276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.214287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.214291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.214296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.214307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.224296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.224342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.224352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.224358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.224362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.224372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.234309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.234348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.234358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.234363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.234368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.234378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 
00:29:18.100 [2024-11-05 19:18:47.244345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.244403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.244412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.244418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.244422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.244432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.254244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.254290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.254301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.254307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.254311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.254322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.264368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.264409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.264419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.264427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.264432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.264442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 
00:29:18.100 [2024-11-05 19:18:47.274385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.274430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.274449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.274456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.274461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.274475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.284324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.284370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.284382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.284388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.284392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.284404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 00:29:18.100 [2024-11-05 19:18:47.294484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.294526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.294537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.294542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.294546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.100 [2024-11-05 19:18:47.294557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.100 qpair failed and we were unable to recover it. 
00:29:18.100 [2024-11-05 19:18:47.304502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.100 [2024-11-05 19:18:47.304543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.100 [2024-11-05 19:18:47.304554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.100 [2024-11-05 19:18:47.304559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.100 [2024-11-05 19:18:47.304564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.304579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.314517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.314559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.314578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.314584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.314590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.314604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.324555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.324598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.324609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.324615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.324620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.324631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 
00:29:18.101 [2024-11-05 19:18:47.334595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.334639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.334649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.334654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.334659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.334670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.344607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.344647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.344657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.344662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.344667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.344678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.354635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.354673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.354682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.354688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.354692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.354704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 
00:29:18.101 [2024-11-05 19:18:47.364685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.364777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.364787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.364793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.364798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.364808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.374705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.374754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.374764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.374769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.374774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.374785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.384715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.384762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.384773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.384778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.384783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.384796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 
00:29:18.101 [2024-11-05 19:18:47.394786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.394857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.394871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.394877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.394881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.394893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.404776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.404818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.404829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.404834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.404839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.404850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 00:29:18.101 [2024-11-05 19:18:47.414807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.101 [2024-11-05 19:18:47.414848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.101 [2024-11-05 19:18:47.414858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.101 [2024-11-05 19:18:47.414863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.101 [2024-11-05 19:18:47.414868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.101 [2024-11-05 19:18:47.414879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.101 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-11-05 19:18:47.424822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.424859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.424869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.424875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.424879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.424890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.434835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.434870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.434880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.434886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.434893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.434904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.444886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.444926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.444936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.444941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.444946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.444957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-11-05 19:18:47.454912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.454955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.454966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.454971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.454976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.454987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.464932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.464997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.465008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.465013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.465017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.465028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.474978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.475027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.475037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.475042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.475046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.475056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-11-05 19:18:47.485001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.485077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.485087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.485092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.485096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.485106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.495038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.495076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.495086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.495091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.495096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.495106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.505050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.505093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.505103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.505108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.505113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.505123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-11-05 19:18:47.515063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.515100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.515110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.515115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.515119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.515129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.525093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.525138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.525150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.525155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.525160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.525170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 00:29:18.364 [2024-11-05 19:18:47.535147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.535197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.535206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.535212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.535216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.364 [2024-11-05 19:18:47.535227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.364 qpair failed and we were unable to recover it. 
00:29:18.364 [2024-11-05 19:18:47.545157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.364 [2024-11-05 19:18:47.545196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.364 [2024-11-05 19:18:47.545206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.364 [2024-11-05 19:18:47.545211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.364 [2024-11-05 19:18:47.545215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.545225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.555191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.555231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.555240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.555245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.555250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.555261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.565215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.565255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.565265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.565270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.565277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.565288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-11-05 19:18:47.575239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.575280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.575289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.575294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.575299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.575309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.585232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.585283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.585293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.585298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.585303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.585313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.595269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.595352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.595363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.595368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.595374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.595385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-11-05 19:18:47.605304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.605344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.605354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.605359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.605365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.605375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.615320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.615361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.615371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.615377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.615381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.615392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.625372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.625415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.625424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.625429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.625434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.625445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-11-05 19:18:47.635391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.635477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.635487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.635493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.635498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.635509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.645415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.645465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.645484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.645491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.645496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.645511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.655458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.655515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.655527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.655532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.655537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.655548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 
00:29:18.365 [2024-11-05 19:18:47.665475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.665525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.665544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.665551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.665556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.365 [2024-11-05 19:18:47.665571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.365 qpair failed and we were unable to recover it. 00:29:18.365 [2024-11-05 19:18:47.675453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.365 [2024-11-05 19:18:47.675496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.365 [2024-11-05 19:18:47.675507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.365 [2024-11-05 19:18:47.675513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.365 [2024-11-05 19:18:47.675518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.366 [2024-11-05 19:18:47.675529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.366 qpair failed and we were unable to recover it. 00:29:18.366 [2024-11-05 19:18:47.685396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.366 [2024-11-05 19:18:47.685438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.366 [2024-11-05 19:18:47.685449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.366 [2024-11-05 19:18:47.685454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.366 [2024-11-05 19:18:47.685458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.366 [2024-11-05 19:18:47.685469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.366 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-11-05 19:18:47.695561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.695603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.695613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.695622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.695626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.695637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.705487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.705535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.705545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.705550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.705555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.705565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.715664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.715754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.715765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.715770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.715775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.715785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-11-05 19:18:47.725653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.725695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.725705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.725710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.725715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.725725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.735670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.735712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.735721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.735727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.735731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.735744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.745674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.745720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.745730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.745735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.745740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.745754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 
00:29:18.628 [2024-11-05 19:18:47.755581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.755618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.755628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.755634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.755639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.755649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.765736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.628 [2024-11-05 19:18:47.765783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.628 [2024-11-05 19:18:47.765794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.628 [2024-11-05 19:18:47.765799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.628 [2024-11-05 19:18:47.765803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.628 [2024-11-05 19:18:47.765814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.628 qpair failed and we were unable to recover it. 00:29:18.628 [2024-11-05 19:18:47.775775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.775864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.775874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.775880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.775884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.775895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 
00:29:18.629 [2024-11-05 19:18:47.785826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.785911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.785921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.785926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.785931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.785941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.795807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.795851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.795861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.795866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.795871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.795881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.805851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.805892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.805902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.805907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.805912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.805923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 
00:29:18.629 [2024-11-05 19:18:47.815847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.815889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.815899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.815904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.815909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.815919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.825903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.825942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.825952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.825960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.825965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.825976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.835792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.835835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.835845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.835851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.835856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.835867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 
00:29:18.629 [2024-11-05 19:18:47.845980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.846025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.846035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.846040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.846045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.846056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.855982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.856026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.856036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.856041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.856046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.856056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.865989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.866044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.866054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.866059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.866064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.866077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 
00:29:18.629 [2024-11-05 19:18:47.876040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.876077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.876087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.876093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.876097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.876108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.886081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.886122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.886133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.886138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.886142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.886152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 00:29:18.629 [2024-11-05 19:18:47.896103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.896146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.629 [2024-11-05 19:18:47.896155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.629 [2024-11-05 19:18:47.896160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.629 [2024-11-05 19:18:47.896165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.629 [2024-11-05 19:18:47.896175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.629 qpair failed and we were unable to recover it. 
00:29:18.629 [2024-11-05 19:18:47.905975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.629 [2024-11-05 19:18:47.906012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.630 [2024-11-05 19:18:47.906022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.630 [2024-11-05 19:18:47.906027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.630 [2024-11-05 19:18:47.906032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.630 [2024-11-05 19:18:47.906042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.630 qpair failed and we were unable to recover it. 00:29:18.630 [2024-11-05 19:18:47.916130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.630 [2024-11-05 19:18:47.916172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.630 [2024-11-05 19:18:47.916181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.630 [2024-11-05 19:18:47.916187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.630 [2024-11-05 19:18:47.916191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.630 [2024-11-05 19:18:47.916201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.630 qpair failed and we were unable to recover it. 00:29:18.630 [2024-11-05 19:18:47.926167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.630 [2024-11-05 19:18:47.926211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.630 [2024-11-05 19:18:47.926221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.630 [2024-11-05 19:18:47.926226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.630 [2024-11-05 19:18:47.926231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.630 [2024-11-05 19:18:47.926241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.630 qpair failed and we were unable to recover it. 
00:29:18.630 [2024-11-05 19:18:47.936206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.630 [2024-11-05 19:18:47.936251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.630 [2024-11-05 19:18:47.936260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.630 [2024-11-05 19:18:47.936265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.630 [2024-11-05 19:18:47.936270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.630 [2024-11-05 19:18:47.936281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.630 qpair failed and we were unable to recover it. 00:29:18.630 [2024-11-05 19:18:47.946078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.630 [2024-11-05 19:18:47.946139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.630 [2024-11-05 19:18:47.946149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.630 [2024-11-05 19:18:47.946154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.630 [2024-11-05 19:18:47.946159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.630 [2024-11-05 19:18:47.946169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.630 qpair failed and we were unable to recover it. 00:29:18.892 [2024-11-05 19:18:47.956249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.892 [2024-11-05 19:18:47.956290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.892 [2024-11-05 19:18:47.956303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.892 [2024-11-05 19:18:47.956308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.892 [2024-11-05 19:18:47.956313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.892 [2024-11-05 19:18:47.956323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.892 qpair failed and we were unable to recover it. 
00:29:18.892 [2024-11-05 19:18:47.966274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.892 [2024-11-05 19:18:47.966314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.892 [2024-11-05 19:18:47.966324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.892 [2024-11-05 19:18:47.966330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:47.966334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:47.966345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:47.976324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:47.976369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:47.976379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:47.976385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:47.976390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:47.976400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:47.986339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:47.986380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:47.986390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:47.986395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:47.986400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:47.986410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 
00:29:18.893 [2024-11-05 19:18:47.996220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:47.996264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:47.996274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:47.996279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:47.996287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:47.996297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.006394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.006434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.006444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.006450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.006455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.006465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.016392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.016437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.016447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.016453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.016457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.016468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 
00:29:18.893 [2024-11-05 19:18:48.026430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.026472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.026482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.026488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.026493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.026503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.036358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.036424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.036434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.036440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.036444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.036455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.046492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.046541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.046551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.046556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.046561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.046571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 
00:29:18.893 [2024-11-05 19:18:48.056539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.056629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.056649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.056656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.056661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.056675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.066547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.066591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.066603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.066608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.066613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.066625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.076574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.076613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.076623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.076628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.076633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.076644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 
00:29:18.893 [2024-11-05 19:18:48.086611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.893 [2024-11-05 19:18:48.086659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.893 [2024-11-05 19:18:48.086673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.893 [2024-11-05 19:18:48.086678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.893 [2024-11-05 19:18:48.086683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.893 [2024-11-05 19:18:48.086693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.893 qpair failed and we were unable to recover it. 00:29:18.893 [2024-11-05 19:18:48.096643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.096689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.096700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.096706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.096710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.096721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.106550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.106641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.106651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.106657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.106661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.106672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 
00:29:18.894 [2024-11-05 19:18:48.116550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.116589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.116599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.116604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.116608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.116619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.126770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.126855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.126866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.126871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.126878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.126889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.136731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.136775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.136785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.136790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.136795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.136805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 
00:29:18.894 [2024-11-05 19:18:48.146756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.146801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.146810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.146816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.146820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.146831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.156764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.156804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.156814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.156819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.156824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.156834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.166817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.166856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.166866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.166872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.166876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.166887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 
00:29:18.894 [2024-11-05 19:18:48.176718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.176762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.176774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.176780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.176784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.176796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.186883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.186931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.186941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.186946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.186951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.186961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.196759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.196800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.196810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.196816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.196820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.196832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 
00:29:18.894 [2024-11-05 19:18:48.206935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.894 [2024-11-05 19:18:48.206976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.894 [2024-11-05 19:18:48.206986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.894 [2024-11-05 19:18:48.206991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.894 [2024-11-05 19:18:48.206996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:18.894 [2024-11-05 19:18:48.207007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.894 qpair failed and we were unable to recover it. 00:29:18.894 [2024-11-05 19:18:48.216828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.216875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.216886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.216892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.216898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.216910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 00:29:19.157 [2024-11-05 19:18:48.226990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.227054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.227064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.227069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.227074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.227084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 
00:29:19.157 [2024-11-05 19:18:48.237019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.237058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.237067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.237073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.237077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.237088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 00:29:19.157 [2024-11-05 19:18:48.247012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.247055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.247064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.247070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.247074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.247084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 00:29:19.157 [2024-11-05 19:18:48.256955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.257054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.257064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.257074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.257079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.257089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 
00:29:19.157 [2024-11-05 19:18:48.267077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.267163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.267172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.267178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.267182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.267193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 00:29:19.157 [2024-11-05 19:18:48.277097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.277138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.277147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.277153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.277157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.157 [2024-11-05 19:18:48.277167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.157 qpair failed and we were unable to recover it. 00:29:19.157 [2024-11-05 19:18:48.287154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.157 [2024-11-05 19:18:48.287260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.157 [2024-11-05 19:18:48.287270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.157 [2024-11-05 19:18:48.287276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.157 [2024-11-05 19:18:48.287281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.287291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 
00:29:19.158 [2024-11-05 19:18:48.297179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.297233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.297243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.297249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.297254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.297267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.307211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.307250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.307260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.307265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.307270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.307280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.317095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.317135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.317145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.317151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.317155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.317166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 
00:29:19.158 [2024-11-05 19:18:48.327254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.327297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.327307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.327312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.327316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.327326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.337156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.337198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.337208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.337213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.337217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.337227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.347302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.347341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.347352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.347357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.347361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.347371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 
00:29:19.158 [2024-11-05 19:18:48.357319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.357358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.357369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.357374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.357379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.357389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.367358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.367435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.367445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.367450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.367455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.367466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.377435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.377515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.377524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.377530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.377534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.377545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 
00:29:19.158 [2024-11-05 19:18:48.387411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.387451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.387463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.387468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.387473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.387483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.397308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.397353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.397363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.397368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.397373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.397383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 00:29:19.158 [2024-11-05 19:18:48.407348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.407402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.407412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.407417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.407421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.407432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.158 qpair failed and we were unable to recover it. 
00:29:19.158 [2024-11-05 19:18:48.417510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.158 [2024-11-05 19:18:48.417551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.158 [2024-11-05 19:18:48.417561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.158 [2024-11-05 19:18:48.417566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.158 [2024-11-05 19:18:48.417571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.158 [2024-11-05 19:18:48.417581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 00:29:19.159 [2024-11-05 19:18:48.427568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.427620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.427638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.427645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.427650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.427669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 00:29:19.159 [2024-11-05 19:18:48.437553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.437594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.437605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.437611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.437616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.437627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 
00:29:19.159 [2024-11-05 19:18:48.447579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.447630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.447640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.447645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.447650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.447661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 00:29:19.159 [2024-11-05 19:18:48.457648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.457727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.457737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.457742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.457751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.457762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 00:29:19.159 [2024-11-05 19:18:48.467742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.467781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.467791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.467796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.467801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.467811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 
00:29:19.159 [2024-11-05 19:18:48.477647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.159 [2024-11-05 19:18:48.477690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.159 [2024-11-05 19:18:48.477701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.159 [2024-11-05 19:18:48.477706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.159 [2024-11-05 19:18:48.477710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.159 [2024-11-05 19:18:48.477721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.159 qpair failed and we were unable to recover it. 00:29:19.421 [2024-11-05 19:18:48.487687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.421 [2024-11-05 19:18:48.487768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.421 [2024-11-05 19:18:48.487778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.421 [2024-11-05 19:18:48.487783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.421 [2024-11-05 19:18:48.487788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.421 [2024-11-05 19:18:48.487799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.421 qpair failed and we were unable to recover it. 00:29:19.421 [2024-11-05 19:18:48.497727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.421 [2024-11-05 19:18:48.497772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.421 [2024-11-05 19:18:48.497783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.421 [2024-11-05 19:18:48.497788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.421 [2024-11-05 19:18:48.497793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.421 [2024-11-05 19:18:48.497804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.421 qpair failed and we were unable to recover it. 
00:29:19.421 [2024-11-05 19:18:48.507740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.421 [2024-11-05 19:18:48.507801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.421 [2024-11-05 19:18:48.507811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.421 [2024-11-05 19:18:48.507816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.507821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.422 [2024-11-05 19:18:48.507831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.422 qpair failed and we were unable to recover it. 00:29:19.422 [2024-11-05 19:18:48.517772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.517865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.517878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.517883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.517888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.422 [2024-11-05 19:18:48.517899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.422 qpair failed and we were unable to recover it. 00:29:19.422 [2024-11-05 19:18:48.527806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.527887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.527897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.527902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.527906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd674000b90 00:29:19.422 [2024-11-05 19:18:48.527918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:19.422 qpair failed and we were unable to recover it. 
00:29:19.422 [2024-11-05 19:18:48.537852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.537911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.537937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.537947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.537955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x150a0c0 00:29:19.422 [2024-11-05 19:18:48.537976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.422 qpair failed and we were unable to recover it. 00:29:19.422 [2024-11-05 19:18:48.547826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.547876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.547903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.547912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.547919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x150a0c0 00:29:19.422 [2024-11-05 19:18:48.547940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.422 qpair failed and we were unable to recover it. 
00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 [2024-11-05 19:18:48.548887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.422 [2024-11-05 19:18:48.557829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.557929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.557980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.558005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: 
*ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.558026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd670000b90 00:29:19.422 [2024-11-05 19:18:48.558073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.422 qpair failed and we were unable to recover it. 00:29:19.422 [2024-11-05 19:18:48.567811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.422 [2024-11-05 19:18:48.567875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.422 [2024-11-05 19:18:48.567902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.422 [2024-11-05 19:18:48.567916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.422 [2024-11-05 19:18:48.567929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd670000b90 00:29:19.422 [2024-11-05 19:18:48.567959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.422 qpair failed and we were unable to recover it. 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 
starting I/O failed 00:29:19.422 Write completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.422 starting I/O failed 00:29:19.422 Read completed with error (sct=0, sc=8) 00:29:19.423 starting I/O failed 00:29:19.423 Write completed with error (sct=0, sc=8) 00:29:19.423 starting I/O failed 00:29:19.423 Read completed with error (sct=0, sc=8) 00:29:19.423 starting I/O failed 00:29:19.423 Read completed with error (sct=0, sc=8) 00:29:19.423 starting I/O failed 00:29:19.423 [2024-11-05 19:18:48.568870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.423 [2024-11-05 19:18:48.578009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.423 [2024-11-05 19:18:48.578116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.423 [2024-11-05 19:18:48.578166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.423 [2024-11-05 19:18:48.578190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.423 [2024-11-05 19:18:48.578211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd67c000b90 00:29:19.423 [2024-11-05 19:18:48.578259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.423 qpair failed and we were unable to recover it. 00:29:19.423 [2024-11-05 19:18:48.587954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.423 [2024-11-05 19:18:48.588016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.423 [2024-11-05 19:18:48.588044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.423 [2024-11-05 19:18:48.588058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.423 [2024-11-05 19:18:48.588071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd67c000b90 00:29:19.423 [2024-11-05 19:18:48.588100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.423 qpair failed and we were unable to recover it. 
00:29:19.423 [2024-11-05 19:18:48.588524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ffe00 is same with the state(6) to be set
00:29:19.423 [2024-11-05 19:18:48.588828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14ffe00 (9): Bad file descriptor
00:29:19.423 Initializing NVMe Controllers
00:29:19.423 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:19.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:19.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:19.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:19.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:19.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:19.423 Initialization complete. Launching workers.
00:29:19.423 Starting thread on core 1
00:29:19.423 Starting thread on core 2
00:29:19.423 Starting thread on core 3
00:29:19.423 Starting thread on core 0
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:19.423
00:29:19.423 real 0m11.329s
00:29:19.423 user 0m21.829s
00:29:19.423 sys 0m3.585s
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:19.423 ************************************
00:29:19.423 END TEST nvmf_target_disconnect_tc2
00:29:19.423 ************************************
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20}
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 518532 ']'
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 518532
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 518532 ']'
00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 518532 00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:19.423 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 518532 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 518532' 00:29:19.685 killing process with pid 518532 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 518532 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 518532 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@264 -- # local dev 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:19.685 19:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # return 0 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 
)) 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@284 -- # iptr 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:29:22.232 00:29:22.232 real 0m21.609s 00:29:22.232 user 0m49.471s 00:29:22.232 sys 0m9.576s 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:22.232 19:18:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:22.232 ************************************ 00:29:22.232 END TEST nvmf_target_disconnect 00:29:22.232 ************************************ 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # [[ tcp == \t\c\p ]] 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.232 ************************************ 00:29:22.232 START TEST nvmf_digest 00:29:22.232 ************************************ 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:22.232 * Looking for test storage... 
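The nvmf_fini sweep just traced is worth a compact restatement: every firewall rule the harness adds carries an SPDK_NVMF comment, so teardown can drop them all in one pass without tracking individual rules, and each device in dev_map gets its addresses flushed. Condensed from the commands above:

    for dev in cvl_0_0 cvl_0_1; do                      # dev_map contents for this run
        [ -e "/sys/class/net/$dev/address" ] && ip addr flush dev "$dev"
    done
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules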
00:29:22.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:22.232 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.233 --rc genhtml_branch_coverage=1 00:29:22.233 --rc genhtml_function_coverage=1 00:29:22.233 --rc genhtml_legend=1 00:29:22.233 --rc geninfo_all_blocks=1 00:29:22.233 --rc geninfo_unexecuted_blocks=1 00:29:22.233 00:29:22.233 ' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.233 --rc genhtml_branch_coverage=1 00:29:22.233 --rc genhtml_function_coverage=1 00:29:22.233 --rc genhtml_legend=1 00:29:22.233 --rc geninfo_all_blocks=1 00:29:22.233 --rc geninfo_unexecuted_blocks=1 00:29:22.233 00:29:22.233 ' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.233 --rc genhtml_branch_coverage=1 00:29:22.233 --rc genhtml_function_coverage=1 00:29:22.233 --rc genhtml_legend=1 00:29:22.233 --rc geninfo_all_blocks=1 00:29:22.233 --rc geninfo_unexecuted_blocks=1 00:29:22.233 00:29:22.233 ' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:22.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.233 --rc genhtml_branch_coverage=1 00:29:22.233 --rc genhtml_function_coverage=1 00:29:22.233 --rc genhtml_legend=1 00:29:22.233 --rc geninfo_all_blocks=1 00:29:22.233 --rc geninfo_unexecuted_blocks=1 00:29:22.233 00:29:22.233 ' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.233 
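The lt/cmp_versions trace above gates the lcov coverage options on lcov's version; the logic reduces to a field-by-field numeric compare after splitting both versions on '.', '-' and ':' (a minimal sketch; the real scripts/common.sh also validates each field as a decimal):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]          # all fields equal: true only for ==, <= and >=
    }
    lt 1.15 2 && echo "lcov is older than 2"   # the exact call traced above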
19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:22.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp 
']' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:29:22.233 19:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # local -ga x722 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
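The gather_supported_nvmf_pci_devs pass that follows is a sysfs walk over PCI devices; a minimal sketch, keeping only the E810 match relevant to this run (0x8086:0x159b, as echoed below; the loop body is a condensed assumption):

    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} ($(<"$pci/vendor") - $(<"$pci/device"))"
        for net in "$pci"/net/*; do                     # netdevs bound to this port
            [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done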
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:28.821 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:28.821 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:28.821 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:28.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # create_target_ns 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:28.821 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:28.822 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # 
ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:28.822 10.0.0.1 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:28.822 10.0.0.2 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:28.822 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:29.082 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:29.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.644 ms 00:29:29.083 00:29:29.083 --- 10.0.0.1 ping statistics --- 00:29:29.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.083 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:29.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:29.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:29:29.083 00:29:29.083 --- 10.0.0.2 ping statistics --- 00:29:29.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.083 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:29.083 
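The interface pairing and ping checks traced above amount to this topology: the target-side port moves into a private network namespace, each side gets one address of a /24, the ifalias file doubles as the harness's IP lookup key, and port 4420 is opened on the initiator side with a tagged rule. Condensed from the commands in the trace:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk              # target NIC leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_0                 # initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1   # target0
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator
    ping -c 1 10.0.0.2                                  # initiator -> target ns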
19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' 
]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:29.083 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.344 ************************************ 00:29:29.344 START TEST nvmf_digest_clean 00:29:29.344 ************************************ 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=523938 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 523938 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 523938 ']' 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.344 19:18:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.344 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:29.344 [2024-11-05 19:18:58.512395] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:29.344 [2024-11-05 19:18:58.512461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.344 [2024-11-05 19:18:58.594627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.344 [2024-11-05 19:18:58.635644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.344 [2024-11-05 19:18:58.635680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.344 [2024-11-05 19:18:58.635689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.344 [2024-11-05 19:18:58.635696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.344 [2024-11-05 19:18:58.635702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
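nvmfappstart above launches the target inside the namespace with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that pairing (the readiness probe via spdk_get_version is an assumption, not the helper's literal implementation):

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done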
00:29:29.344 [2024-11-05 19:18:58.636325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.283 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:30.283 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:30.283 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:30.283 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.283 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.284 null0 00:29:30.284 [2024-11-05 19:18:59.412609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.284 [2024-11-05 19:18:59.436842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=524265 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 524265 /var/tmp/bperf.sock 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 524265 ']' 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
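common_target_config above ends with a null0 bdev and a TCP listener on 10.0.0.2:4420; the RPC sequence behind it looks roughly like this (a sketch: the null bdev size/block size and the -a flag are assumptions, while '-t tcp -o' mirrors NVMF_TRANSPORT_OPTS from the trace):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc framework_start_init                           # leave --wait-for-rpc limbo
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 100 4096                # backing namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420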
00:29:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.284 19:18:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:30.284 [2024-11-05 19:18:59.493123] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:30.284 [2024-11-05 19:18:59.493171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524265 ] 00:29:30.284 [2024-11-05 19:18:59.579528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.543 [2024-11-05 19:18:59.615259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.113 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:31.113 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:31.113 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:31.113 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:31.113 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:31.374 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.374 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.635 nvme0n1 00:29:31.635 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:31.635 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.896 Running I/O for 2 seconds... 
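The host side mirrors that pattern: bdevperf runs with -z --wait-for-rpc on its own socket (/var/tmp/bperf.sock), the scan_dsa guard at host/digest.sh@86 evaluates to false so no DSA accel module is scanned, and only then is the framework initialized over RPC. The controller is attached with --ddgst, which turns on the NVMe/TCP data digest (a CRC32C over each data PDU); that is what later makes crc32c operations show up in the host accel statistics. The three RPC-driven steps, as they expand above (paths shortened):

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests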
00:29:33.788 19712.00 IOPS, 77.00 MiB/s [2024-11-05T18:19:03.111Z] 19611.50 IOPS, 76.61 MiB/s 00:29:33.788 Latency(us) 00:29:33.788 [2024-11-05T18:19:03.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:33.788 nvme0n1 : 2.01 19615.55 76.62 0.00 0.00 6517.88 3140.27 22282.24 00:29:33.788 [2024-11-05T18:19:03.111Z] =================================================================================================================== 00:29:33.788 [2024-11-05T18:19:03.111Z] Total : 19615.55 76.62 0.00 0.00 6517.88 3140.27 22282.24 00:29:33.788 { 00:29:33.788 "results": [ 00:29:33.788 { 00:29:33.788 "job": "nvme0n1", 00:29:33.788 "core_mask": "0x2", 00:29:33.788 "workload": "randread", 00:29:33.788 "status": "finished", 00:29:33.788 "queue_depth": 128, 00:29:33.788 "io_size": 4096, 00:29:33.788 "runtime": 2.006316, 00:29:33.788 "iops": 19615.554080214682, 00:29:33.788 "mibps": 76.6232581258386, 00:29:33.788 "io_failed": 0, 00:29:33.788 "io_timeout": 0, 00:29:33.788 "avg_latency_us": 6517.882422055647, 00:29:33.788 "min_latency_us": 3140.266666666667, 00:29:33.788 "max_latency_us": 22282.24 00:29:33.788 } 00:29:33.788 ], 00:29:33.788 "core_count": 1 00:29:33.788 } 00:29:33.788 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:33.788 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:33.788 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:33.788 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:33.788 | select(.opcode=="crc32c") 00:29:33.788 | "\(.module_name) \(.executed)"' 00:29:33.788 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 524265 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 524265 ']' 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 524265 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 524265 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 524265' 00:29:34.052 killing process with pid 524265 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 524265 00:29:34.052 Received shutdown signal, test time was about 2.000000 seconds 00:29:34.052 00:29:34.052 Latency(us) 00:29:34.052 [2024-11-05T18:19:03.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.052 [2024-11-05T18:19:03.375Z] =================================================================================================================== 00:29:34.052 [2024-11-05T18:19:03.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.052 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 524265 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=524957 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 524957 /var/tmp/bperf.sock 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 524957 ']' 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:34.330 19:19:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:34.330 [2024-11-05 19:19:03.440228] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
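After each run, get_accel_stats (host/digest.sh@36) reads bdevperf's accel counters, and the jq filter keeps only the crc32c opcode, emitting the executing module and its execution count. With scan_dsa=false the expected module is software, and the checks at @95 and @96 require a non-zero count from it. The same verification as a standalone command:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: a line such as "software <count>" with <count> greater than zero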
00:29:34.330 [2024-11-05 19:19:03.440284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524957 ] 00:29:34.330 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:34.330 Zero copy mechanism will not be used. 00:29:34.330 [2024-11-05 19:19:03.529477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.330 [2024-11-05 19:19:03.563908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.906 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:34.906 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:34.906 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:34.906 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:34.906 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:35.168 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.168 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.428 nvme0n1 00:29:35.428 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:35.428 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.689 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.689 Zero copy mechanism will not be used. 00:29:35.689 Running I/O for 2 seconds... 
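This second run moves to 128 KiB I/O at queue depth 16. The zero copy notice comes from the host TCP path: 131072 exceeds the 65536-byte threshold, so payloads go through a buffered send instead of a zero-copy one. The MiB/s column in the result tables is iops * io_size / 2^20, which can be cross-checked against the JSON below:

    awk 'BEGIN { printf "%.2f MiB/s\n", 3136.25 * 131072 / 1048576 }'    # 392.03, matching the "mibps" field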
00:29:37.574 3034.00 IOPS, 379.25 MiB/s [2024-11-05T18:19:06.897Z] 3134.00 IOPS, 391.75 MiB/s 00:29:37.574 Latency(us) 00:29:37.574 [2024-11-05T18:19:06.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:37.574 nvme0n1 : 2.00 3136.25 392.03 0.00 0.00 5097.97 699.73 13544.11 00:29:37.574 [2024-11-05T18:19:06.897Z] =================================================================================================================== 00:29:37.574 [2024-11-05T18:19:06.898Z] Total : 3136.25 392.03 0.00 0.00 5097.97 699.73 13544.11 00:29:37.575 { 00:29:37.575 "results": [ 00:29:37.575 { 00:29:37.575 "job": "nvme0n1", 00:29:37.575 "core_mask": "0x2", 00:29:37.575 "workload": "randread", 00:29:37.575 "status": "finished", 00:29:37.575 "queue_depth": 16, 00:29:37.575 "io_size": 131072, 00:29:37.575 "runtime": 2.003666, 00:29:37.575 "iops": 3136.251251456081, 00:29:37.575 "mibps": 392.0314064320101, 00:29:37.575 "io_failed": 0, 00:29:37.575 "io_timeout": 0, 00:29:37.575 "avg_latency_us": 5097.965151708041, 00:29:37.575 "min_latency_us": 699.7333333333333, 00:29:37.575 "max_latency_us": 13544.106666666667 00:29:37.575 } 00:29:37.575 ], 00:29:37.575 "core_count": 1 00:29:37.575 } 00:29:37.575 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:37.575 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:37.575 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:37.575 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:37.575 | select(.opcode=="crc32c") 00:29:37.575 | "\(.module_name) \(.executed)"' 00:29:37.575 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 524957 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 524957 ']' 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 524957 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:37.834 19:19:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 524957 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 524957' 00:29:37.834 killing process with pid 524957 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 524957 00:29:37.834 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.834 00:29:37.834 Latency(us) 00:29:37.834 [2024-11-05T18:19:07.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.834 [2024-11-05T18:19:07.157Z] =================================================================================================================== 00:29:37.834 [2024-11-05T18:19:07.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 524957 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=525639 00:29:37.834 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 525639 /var/tmp/bperf.sock 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 525639 ']' 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:37.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.835 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:38.095 [2024-11-05 19:19:07.160611] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:29:38.095 [2024-11-05 19:19:07.160669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525639 ] 00:29:38.095 [2024-11-05 19:19:07.242238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.095 [2024-11-05 19:19:07.271604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.725 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:38.725 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:38.725 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:38.725 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:38.725 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:38.985 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.985 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.245 nvme0n1 00:29:39.245 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:39.245 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.506 Running I/O for 2 seconds... 
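Every bperf_rpc call in these tests expands through host/digest.sh@18 into an rpc.py invocation against the bdevperf socket, as the paired @18 lines above show. A paraphrase of the wrapper, reconstructed from those expansions, with $rootdir standing for the spdk checkout:

    bperf_rpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"    # same RPCs, different socket than the target
    }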
00:29:41.388 21585.00 IOPS, 84.32 MiB/s [2024-11-05T18:19:10.711Z] 21645.50 IOPS, 84.55 MiB/s 00:29:41.388 Latency(us) 00:29:41.388 [2024-11-05T18:19:10.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.388 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.388 nvme0n1 : 2.01 21645.34 84.55 0.00 0.00 5905.83 2293.76 10267.31 00:29:41.388 [2024-11-05T18:19:10.711Z] =================================================================================================================== 00:29:41.388 [2024-11-05T18:19:10.711Z] Total : 21645.34 84.55 0.00 0.00 5905.83 2293.76 10267.31 00:29:41.388 { 00:29:41.388 "results": [ 00:29:41.388 { 00:29:41.388 "job": "nvme0n1", 00:29:41.388 "core_mask": "0x2", 00:29:41.388 "workload": "randwrite", 00:29:41.388 "status": "finished", 00:29:41.388 "queue_depth": 128, 00:29:41.388 "io_size": 4096, 00:29:41.388 "runtime": 2.005928, 00:29:41.388 "iops": 21645.34320274706, 00:29:41.388 "mibps": 84.5521218857307, 00:29:41.388 "io_failed": 0, 00:29:41.388 "io_timeout": 0, 00:29:41.388 "avg_latency_us": 5905.827333962858, 00:29:41.388 "min_latency_us": 2293.76, 00:29:41.388 "max_latency_us": 10267.306666666667 00:29:41.388 } 00:29:41.388 ], 00:29:41.388 "core_count": 1 00:29:41.388 } 00:29:41.388 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:41.388 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:41.388 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:41.388 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:41.388 | select(.opcode=="crc32c") 00:29:41.388 | "\(.module_name) \(.executed)"' 00:29:41.388 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 525639 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 525639 ']' 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 525639 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 525639 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 525639' 00:29:41.649 killing process with pid 525639 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 525639 00:29:41.649 Received shutdown signal, test time was about 2.000000 seconds 00:29:41.649 00:29:41.649 Latency(us) 00:29:41.649 [2024-11-05T18:19:10.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.649 [2024-11-05T18:19:10.972Z] =================================================================================================================== 00:29:41.649 [2024-11-05T18:19:10.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.649 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 525639 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=526425 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 526425 /var/tmp/bperf.sock 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 526425 ']' 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:41.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:41.910 19:19:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:41.910 [2024-11-05 19:19:11.039987] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:29:41.910 [2024-11-05 19:19:11.040046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526425 ] 00:29:41.910 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:41.910 Zero copy mechanism will not be used. 00:29:41.910 [2024-11-05 19:19:11.121935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.910 [2024-11-05 19:19:11.151405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.852 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:42.852 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:42.852 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:42.852 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:42.852 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:42.852 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.852 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.113 nvme0n1 00:29:43.113 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:43.113 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.374 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:43.374 Zero copy mechanism will not be used. 00:29:43.374 Running I/O for 2 seconds... 
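This is the last of the four clean-digest runs; the call sites at host/digest.sh@128 through @131 sweep both I/O directions and both I/O shapes with the DSA scan disabled. In outline:

    run_bperf randread  4096   128 false    # 4 KiB reads,    queue depth 128
    run_bperf randread  131072 16  false    # 128 KiB reads,  queue depth 16
    run_bperf randwrite 4096   128 false    # 4 KiB writes,   queue depth 128
    run_bperf randwrite 131072 16  false    # 128 KiB writes, queue depth 16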
00:29:45.261 3991.00 IOPS, 498.88 MiB/s [2024-11-05T18:19:14.584Z] 3891.00 IOPS, 486.38 MiB/s 00:29:45.261 Latency(us) 00:29:45.261 [2024-11-05T18:19:14.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.261 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:45.261 nvme0n1 : 2.00 3893.53 486.69 0.00 0.00 4104.48 1761.28 7700.48 00:29:45.261 [2024-11-05T18:19:14.584Z] =================================================================================================================== 00:29:45.261 [2024-11-05T18:19:14.584Z] Total : 3893.53 486.69 0.00 0.00 4104.48 1761.28 7700.48 00:29:45.261 { 00:29:45.261 "results": [ 00:29:45.261 { 00:29:45.261 "job": "nvme0n1", 00:29:45.261 "core_mask": "0x2", 00:29:45.261 "workload": "randwrite", 00:29:45.261 "status": "finished", 00:29:45.261 "queue_depth": 16, 00:29:45.261 "io_size": 131072, 00:29:45.261 "runtime": 2.003837, 00:29:45.261 "iops": 3893.5302621919845, 00:29:45.261 "mibps": 486.69128277399807, 00:29:45.261 "io_failed": 0, 00:29:45.261 "io_timeout": 0, 00:29:45.261 "avg_latency_us": 4104.480396479536, 00:29:45.261 "min_latency_us": 1761.28, 00:29:45.261 "max_latency_us": 7700.48 00:29:45.261 } 00:29:45.261 ], 00:29:45.261 "core_count": 1 00:29:45.261 } 00:29:45.261 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:45.261 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:45.261 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:45.261 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:45.261 | select(.opcode=="crc32c") 00:29:45.261 | "\(.module_name) \(.executed)"' 00:29:45.261 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 526425 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 526425 ']' 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 526425 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 526425 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 526425' 00:29:45.522 killing process with pid 526425 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 526425 00:29:45.522 Received shutdown signal, test time was about 2.000000 seconds 00:29:45.522 00:29:45.522 Latency(us) 00:29:45.522 [2024-11-05T18:19:14.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.522 [2024-11-05T18:19:14.845Z] =================================================================================================================== 00:29:45.522 [2024-11-05T18:19:14.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 526425 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 523938 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 523938 ']' 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 523938 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:45.522 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523938 00:29:45.784 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:45.784 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:45.784 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523938' 00:29:45.784 killing process with pid 523938 00:29:45.784 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 523938 00:29:45.784 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 523938 00:29:45.784 00:29:45.784 real 0m16.573s 00:29:45.784 user 0m32.883s 00:29:45.784 sys 0m3.409s 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:45.784 ************************************ 00:29:45.784 END TEST nvmf_digest_clean 00:29:45.784 ************************************ 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:45.784 ************************************ 00:29:45.784 START TEST nvmf_digest_error 00:29:45.784 ************************************ 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:29:45.784 
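Each killprocess above follows the same liveness-then-terminate pattern from autotest_common.sh: probe the pid, check the command name so a recycled pid belonging to an unrelated process is not killed (and so sudo is used only when the tracked process is a sudo wrapper), then kill and reap. A condensed paraphrase, with the sudo branch simplified from the @962 test visible above:

    kill -0 "$pid"                              # probe only; signal 0 delivers nothing
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 / reactor_1 in this log
    if [ "$name" = sudo ]; then sudo kill "$pid"; else kill "$pid"; fi
    wait "$pid"                                 # reap and propagate the exit status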
19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:45.784 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=527349 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 527349 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 527349 ']' 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:46.045 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.045 [2024-11-05 19:19:15.168989] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:46.045 [2024-11-05 19:19:15.169043] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.045 [2024-11-05 19:19:15.246324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.045 [2024-11-05 19:19:15.282733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.045 [2024-11-05 19:19:15.282771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.045 [2024-11-05 19:19:15.282779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.045 [2024-11-05 19:19:15.282785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.045 [2024-11-05 19:19:15.282791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:46.045 [2024-11-05 19:19:15.283344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.986 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.987 [2024-11-05 19:19:15.993359] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.987 19:19:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.987 null0 00:29:46.987 [2024-11-05 19:19:16.075618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.987 [2024-11-05 19:19:16.099848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=527405 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 527405 /var/tmp/bperf.sock 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 527405 ']' 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
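The error variant adds one target-side step that the clean runs lack: before framework_start_init, the crc32c opcode is reassigned from the software module to the accel error module (the accel_rpc.c notice above). Opcode assignment has to happen before the accel framework initializes, which is why the target is again started with --wait-for-rpc; the bdevperf instance below, by contrast, launches without --wait-for-rpc, since the host needs no pre-init accel configuration this time. The assignment as a standalone RPC:

    ./scripts/rpc.py accel_assign_opc -o crc32c -m error    # default socket /var/tmp/spdk.sock, i.e. the target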
00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:46.987 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:46.987 [2024-11-05 19:19:16.169222] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:46.987 [2024-11-05 19:19:16.169271] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527405 ] 00:29:46.987 [2024-11-05 19:19:16.251807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.987 [2024-11-05 19:19:16.281628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.929 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:47.929 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:47.929 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.929 19:19:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:47.929 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:47.929 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.929 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:47.929 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.929 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.930 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.190 nvme0n1 00:29:48.190 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:48.190 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.190 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.190 
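Two host-side knobs are set before injection: --nvme-error-stat keeps per-status NVMe error counters in bdev_nvme, and --bdev-retry-count -1 retries failed bdev I/O indefinitely, so corrupted digests degrade throughput rather than fail the run. Injection itself is armed on the target, where crc32c is now served by the error module:

    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # start from a clean pass-through state
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # then corrupt the next 256 crc32c ops

In the completions that follow, each corrupted digest surfaces on the host as a "data digest error" with NVMe status (00/22), COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 (do-not-retry clear), exactly the retryable failure the retry option is tuned for.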
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.190 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:48.190 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:48.452 Running I/O for 2 seconds... 00:29:48.452 [2024-11-05 19:19:17.565037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.565069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.565078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.452 [2024-11-05 19:19:17.577707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.577727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.577735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.452 [2024-11-05 19:19:17.590983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.591002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.591008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.452 [2024-11-05 19:19:17.605716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.605735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.605742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.452 [2024-11-05 19:19:17.619115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.619133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.619140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:48.452 [2024-11-05 19:19:17.631452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:48.452 [2024-11-05 19:19:17.631471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.452 [2024-11-05 19:19:17.631479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:48.452 [2024-11-05 19:19:17.642573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0)
00:29:48.452 [2024-11-05 19:19:17.642591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.452 [2024-11-05 19:19:17.642598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error from nvme_tcp.c:1365 on tqpair=(0x1e486e0), the failed READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many further cid/lba values, 19:19:17.657 through 19:19:18.545 ...]
00:29:49.239 19939.00 IOPS, 77.89 MiB/s [2024-11-05T18:19:18.562Z]
[... the repeats continue, 19:19:18.560 through 19:19:19.444 ...]
00:29:50.287 [2024-11-05 19:19:19.457571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0)
00:29:50.288 [2024-11-05 19:19:19.457589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:50.288 [2024-11-05 19:19:19.457596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:50.288 [2024-11-05 19:19:19.471619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0)
00:29:50.288 [2024-11-05 19:19:19.471636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.471643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.484343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.484360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.484367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.497787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.497805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.497811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.510216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.510233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.510240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.523352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.523369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.523379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.533639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.533657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.533664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 [2024-11-05 19:19:19.546889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e486e0) 00:29:50.288 [2024-11-05 19:19:19.546907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.288 [2024-11-05 19:19:19.546914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.288 20049.00 IOPS, 78.32 MiB/s 00:29:50.288 Latency(us) 00:29:50.288 
00:29:50.288 [2024-11-05T18:19:19.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:50.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:50.288 nvme0n1 : 2.00 20081.64 78.44 0.00 0.00 6369.04 2225.49 18240.85
00:29:50.288 [2024-11-05T18:19:19.611Z] ===================================================================================================================
00:29:50.288 [2024-11-05T18:19:19.611Z] Total : 20081.64 78.44 0.00 0.00 6369.04 2225.49 18240.85
00:29:50.288 {
00:29:50.288   "results": [
00:29:50.288     {
00:29:50.288       "job": "nvme0n1",
00:29:50.288       "core_mask": "0x2",
00:29:50.288       "workload": "randread",
00:29:50.288       "status": "finished",
00:29:50.288       "queue_depth": 128,
00:29:50.288       "io_size": 4096,
00:29:50.288       "runtime": 2.003123,
00:29:50.288       "iops": 20081.642515212494,
00:29:50.288       "mibps": 78.4439160750488,
00:29:50.288       "io_failed": 0,
00:29:50.288       "io_timeout": 0,
00:29:50.288       "avg_latency_us": 6369.035281327168,
00:29:50.288       "min_latency_us": 2225.4933333333333,
00:29:50.288       "max_latency_us": 18240.853333333333
00:29:50.288     }
00:29:50.288   ],
00:29:50.288   "core_count": 1
00:29:50.288 }
00:29:50.288 19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:50.288 | .driver_specific
00:29:50.288 | .nvme_error
00:29:50.288 | .status_code
00:29:50.288 | .command_transient_transport_error'
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:50.548 19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 527405
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 527405 ']'
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 527405
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 527405
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 527405'
killing process with pid 527405
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 527405
00:29:50.548 Received shutdown signal, test time was about 2.000000 seconds
00:29:50.548
00:29:50.548 Latency(us)
00:29:50.548 [2024-11-05T18:19:19.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
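The get_transient_errcount / bperf_rpc traces above boil down to a single RPC piped through jq. A minimal standalone sketch of that helper, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and was configured with --nvme-error-stat so the per-status-code counters are populated (the SPDK_DIR variable is illustrative, not from the trace):

    #!/usr/bin/env bash
    # Read the per-bdev NVMe error counters and pull out how many completions
    # ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to the local checkout
    get_transient_errcount() {
        local bdev=$1
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 )) || echo "expected transient transport errors, got none" >&2

The (( 157 > 0 )) line in the trace is exactly this assertion: the first run recorded 157 such completions, even though the job's own JSON reports io_failed 0, because each digest-failed READ was retried and eventually succeeded.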
00:29:50.548 [2024-11-05T18:19:19.871Z] ===================================================================================================================
00:29:50.548 [2024-11-05T18:19:19.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:50.548 19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 527405
00:29:50.809 19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=528217
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 528217 /var/tmp/bperf.sock
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 528217 ']'
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
19:19:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:50.809 [2024-11-05 19:19:19.958417] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:29:50.809 [2024-11-05 19:19:19.958476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528217 ]
00:29:50.809 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:50.809 Zero copy mechanism will not be used.
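The host/digest.sh@57 and @60 traces show the launch pattern for this second pass: bdevperf starts idle (-z) pinned to core mask 0x2 with a private RPC socket, and the harness waits for that socket before issuing RPCs. A rough equivalent of the launch, with waitforlisten replaced by a simple poll on rpc_get_methods (an assumption on my part; the autotest helper retries more carefully):

    # Start bdevperf idle: -z defers I/O until perform_tests arrives over RPC.
    # -w randread -o 131072 -q 16 -t 2: 128 KiB random reads, queue depth 16, 2 s run.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the RPC server answers before sending any configuration RPCs.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Note the I/O size here is 131072 bytes (32 blocks of 4096), which is why the startup banner warns that zero copy is disabled and why the error records below all show len:32 instead of the len:1 of the first pass.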
00:29:50.809 [2024-11-05 19:19:20.040609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:50.809 [2024-11-05 19:19:20.071486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:51.749 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:51.750 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:51.750 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:52.010 nvme0n1
00:29:52.271 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:52.271 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:52.271 Zero copy mechanism will not be used.
00:29:52.271 Running I/O for 2 seconds...
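Read together, the @61 through @69 traces are the whole error path of this test: enable per-status-code error counting with unlimited bdev retries, disable crc32c injection so the controller attaches over a clean connection, attach with TCP data digest enabled (--ddgst), flip injection to corrupt, and only then release the queued workload. A condensed sketch of the same RPC sequence (socket, address, and NQN as in the trace; the -t corrupt -i 32 arguments are copied verbatim from it rather than derived):

    # Hypothetical wrapper matching bperf_rpc in the trace.
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, retry forever
    rpc accel_error_inject_error -o crc32c -t disable                    # attach must see honest digests
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest on the wire
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32              # now corrupt crc32c results
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With injection active, each affected READ fails data-digest verification in nvme_tcp_accel_seq_recv_compute_crc32_done and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what fills the log below; as in the first pass, the retries mean the job itself can still finish with io_failed 0 while the error counters climb.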
00:29:52.271 [2024-11-05 19:19:21.456999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0)
00:29:52.271 [2024-11-05 19:19:21.457033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:52.271 [2024-11-05 19:19:21.457042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[log condensed: the same digest-error / failed-READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for roughly ninety further 32-block READs on qid:1 (tqpair 0x7219a0) between 19:19:21.465 and 19:19:22.217, differing only in timestamp, cid, and lba]
00:29:53.060 [2024-11-05 19:19:22.229637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0)
00:29:53.060 [2024-11-05 19:19:22.229655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.060 [2024-11-05 19:19:22.229662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.239438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.239457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.239464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.247401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.247420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.247430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.256744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.256767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.256773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.265081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.265100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.265106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.273555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.273574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.273581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.281948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.281967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.281974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.289503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.289521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.289528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.298697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.298716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.298722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.306716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.306735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.306741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.315700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.315718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.315725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.323616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.323638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.323645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.332182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.332201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.332207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.341751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.341769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.341775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.350721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.350739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 
[2024-11-05 19:19:22.350749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.360241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.360260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.360266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.370349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.370367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.370374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.060 [2024-11-05 19:19:22.380801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.060 [2024-11-05 19:19:22.380821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.060 [2024-11-05 19:19:22.380828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.390426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.390445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.390451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.400475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.400494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.400500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.411123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.411142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.411148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.421277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.421297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.421303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.431778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.431797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.431804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.322 3494.00 IOPS, 436.75 MiB/s [2024-11-05T18:19:22.645Z] [2024-11-05 19:19:22.442400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.442415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.442422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.450718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.450737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.450743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.458506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.458524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.458530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.322 [2024-11-05 19:19:22.466430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.322 [2024-11-05 19:19:22.466449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.322 [2024-11-05 19:19:22.466456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.471913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.471932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.471938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.477953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.477975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.477982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.488585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.488603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.488610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.498012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.498031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.498038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.508235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.508254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.508261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.517462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.517481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.517487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.525627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.525646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.525652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.532376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.532395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.532401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.540493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 
[2024-11-05 19:19:22.540511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.540518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.549652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.549670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.549677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.557342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.557361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.557368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.567529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.567548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.567554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.577386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.577405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.577412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.583884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.583902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.583909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.592300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.592319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.592325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.601651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.601671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.601677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.612726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.612749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.612756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.618109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.618128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.618134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.627896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.627915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.627924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.634065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.634084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.634091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.323 [2024-11-05 19:19:22.642426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.323 [2024-11-05 19:19:22.642444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.323 [2024-11-05 19:19:22.642450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.650030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.650049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.650055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.657876] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.657895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.657901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.669219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.669238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.669244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.678764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.678783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.678790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.690083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.690102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.690108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.701220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.701239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.701246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.710704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.710725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.710732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.719802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.719821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:53.585 [2024-11-05 19:19:22.729287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.735757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.735775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.735781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.746132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.746151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.746157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.753990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.754009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.754015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.763743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.763766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.763772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.770945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.770964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.770970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.778682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.778701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.778707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.786757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.786775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.786782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.796605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.796623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.796630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.806171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.806190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.806197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.817877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.817896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.817902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.829325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.829344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.829350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.841602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.841620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.841626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.852841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.852859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.852866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.861767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.861785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.861792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.585 [2024-11-05 19:19:22.868114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.585 [2024-11-05 19:19:22.868132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.585 [2024-11-05 19:19:22.868142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.586 [2024-11-05 19:19:22.876951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.586 [2024-11-05 19:19:22.876970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.586 [2024-11-05 19:19:22.876976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.586 [2024-11-05 19:19:22.887682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.586 [2024-11-05 19:19:22.887701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.586 [2024-11-05 19:19:22.887708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.586 [2024-11-05 19:19:22.897001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.586 [2024-11-05 19:19:22.897020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.586 [2024-11-05 19:19:22.897026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.586 [2024-11-05 19:19:22.905154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.586 [2024-11-05 19:19:22.905173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.586 [2024-11-05 19:19:22.905179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.912047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.912067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.912073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.918209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.918228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.918235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.925957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.925975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.925982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.935583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.935602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.935609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.945158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.945180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.945186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.952255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.952273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.952280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.958965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.958990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.968801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.968819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 
[2024-11-05 19:19:22.968825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.977870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.977889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.977896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.986272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.986293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.986299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:22.994493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:22.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:22.994519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.005382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.005401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.005407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.015006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.015025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.015031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.021576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.021595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.021602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.032169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.032187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16096 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.032194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.042640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.042660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.042666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.051905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.051924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.051931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.065026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.065045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.065052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.847 [2024-11-05 19:19:23.077393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.847 [2024-11-05 19:19:23.077411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.847 [2024-11-05 19:19:23.077417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.089691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.089711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.089718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.099375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.099394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.099400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.102167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.102196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.113060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.113078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.113085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.121808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.121826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.121833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.134460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.134479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.134485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.146329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.146348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.146354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.153313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.153332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.153339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.162465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.162484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.162491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:53.848 [2024-11-05 19:19:23.171501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:53.848 [2024-11-05 19:19:23.171520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.848 [2024-11-05 19:19:23.171527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.177002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.177021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.108 [2024-11-05 19:19:23.177028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.188432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.188454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.108 [2024-11-05 19:19:23.188460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.198137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.198156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.108 [2024-11-05 19:19:23.198162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.207658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.207678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.108 [2024-11-05 19:19:23.207684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.215086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.215106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.108 [2024-11-05 19:19:23.215112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.108 [2024-11-05 19:19:23.227410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.108 [2024-11-05 19:19:23.227428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.227435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.237450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 
[2024-11-05 19:19:23.237469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.237475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.246002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.246022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.246029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.254080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.254099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.254106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.258493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.258512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.258519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.265391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.265410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.265417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.274650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.274669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.274675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.283642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.283661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.283668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.289113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.289132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.289138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.298277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.298296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.298303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.308686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.308704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.308711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.320987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.321007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.321013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.332064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.332083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.332089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.340186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.340208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.340214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.343953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.343971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.343978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.347439] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.347457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.347464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.351659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.351678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.351684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.358093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.358112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.358118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.370373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.370392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.370399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.381274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.381294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.381300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.393558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.393577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.393583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.109 [2024-11-05 19:19:23.405726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0) 00:29:54.109 [2024-11-05 19:19:23.405750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.109 [2024-11-05 19:19:23.405757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:54.109 [2024-11-05 19:19:23.417393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0)
00:29:54.109 [2024-11-05 19:19:23.417412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.109 [2024-11-05 19:19:23.417418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:54.109 [2024-11-05 19:19:23.428778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0)
00:29:54.109 [2024-11-05 19:19:23.428797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.109 [2024-11-05 19:19:23.428804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:54.370 [2024-11-05 19:19:23.441139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7219a0)
00:29:54.370 [2024-11-05 19:19:23.441159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.370 [2024-11-05 19:19:23.441166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:54.370 3475.00 IOPS, 434.38 MiB/s
00:29:54.370 Latency(us)
00:29:54.370 [2024-11-05T18:19:23.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.370 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:54.370 nvme0n1 : 2.01 3473.75 434.22 0.00 0.00 4603.31 686.08 15073.28
00:29:54.370 [2024-11-05T18:19:23.693Z] ===================================================================================================================
00:29:54.370 [2024-11-05T18:19:23.693Z] Total : 3473.75 434.22 0.00 0.00 4603.31 686.08 15073.28
00:29:54.370 {
00:29:54.370 "results": [
00:29:54.370 {
00:29:54.370 "job": "nvme0n1",
00:29:54.370 "core_mask": "0x2",
00:29:54.370 "workload": "randread",
00:29:54.370 "status": "finished",
00:29:54.370 "queue_depth": 16,
00:29:54.370 "io_size": 131072,
00:29:54.370 "runtime": 2.005324,
00:29:54.370 "iops": 3473.7528698604315,
00:29:54.370 "mibps": 434.21910873255393,
00:29:54.370 "io_failed": 0,
00:29:54.370 "io_timeout": 0,
00:29:54.370 "avg_latency_us": 4603.313159153986,
00:29:54.370 "min_latency_us": 686.08,
00:29:54.370 "max_latency_us": 15073.28
00:29:54.370 }
00:29:54.370 ],
00:29:54.370 "core_count": 1
00:29:54.370 }
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:54.370 | .driver_specific
00:29:54.370 | .nvme_error
00:29:54.370 | .status_code
00:29:54.370 | .command_transient_transport_error'
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 ))
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 528217
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 528217 ']'
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 528217
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:54.370 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 528217
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 528217'
00:29:54.693 killing process with pid 528217
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 528217
00:29:54.693 Received shutdown signal, test time was about 2.000000 seconds
00:29:54.693
00:29:54.693 Latency(us)
00:29:54.693 [2024-11-05T18:19:24.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:54.693 [2024-11-05T18:19:24.016Z] ===================================================================================================================
00:29:54.693 [2024-11-05T18:19:24.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 528217
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=529072
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 529072 /var/tmp/bperf.sock
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 529072 ']'
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:54.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
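The randread pass above ends with the assertion this suite is built around: once bdevperf finishes, the test reads the NVMe error counters off the bdev and requires that at least one command completed with TRANSIENT TRANSPORT ERROR (00/22), which is how the injected crc32c digest corruption surfaces to the host. A condensed, standalone sketch of that check, assuming the rpc.py path, the /var/tmp/bperf.sock socket, and the nvme0n1 bdev name seen in this run (the helper name get_transient_errcount is the host/digest.sh function traced above):

    #!/usr/bin/env bash
    # Count completions that ended as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    # Assumes the controller was attached after "bdev_nvme_set_options
    # --nvme-error-stat", as done elsewhere in this log; without that option
    # bdev_get_iostat does not carry the nvme_error block.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Fail the test unless at least one transient transport error was counted.
    (( errcount > 0 ))

In the pass above the counter came back as 224, so the `(( 224 > 0 ))` assertion passed and the test moved on to the randwrite pass that starts below.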
00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:54.693 19:19:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:54.693 [2024-11-05 19:19:23.859485] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:54.693 [2024-11-05 19:19:23.859542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529072 ] 00:29:54.693 [2024-11-05 19:19:23.941449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.693 [2024-11-05 19:19:23.970549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.684 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:55.943 nvme0n1 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:55.943 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:55.944 Running I/O for 2 seconds... 00:29:55.944 [2024-11-05 19:19:25.228240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:55.944 [2024-11-05 19:19:25.230055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.944 [2024-11-05 19:19:25.230086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:55.944 [2024-11-05 19:19:25.238692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:55.944 [2024-11-05 19:19:25.239808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.944 [2024-11-05 19:19:25.239826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:55.944 [2024-11-05 19:19:25.250701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:55.944 [2024-11-05 19:19:25.251829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.944 [2024-11-05 19:19:25.251846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:55.944 [2024-11-05 19:19:25.262670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:55.944 [2024-11-05 19:19:25.263810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.944 [2024-11-05 19:19:25.263827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.274654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:56.205 [2024-11-05 19:19:25.275809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.275827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.286611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:56.205 [2024-11-05 19:19:25.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.287764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.298565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:56.205 [2024-11-05 19:19:25.299699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23642 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.299716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.310507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:56.205 [2024-11-05 19:19:25.311662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.311678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.322459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8e88 00:29:56.205 [2024-11-05 19:19:25.323566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.323582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.333825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fda78 00:29:56.205 [2024-11-05 19:19:25.334943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.334959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.346922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f20d8 00:29:56.205 [2024-11-05 19:19:25.348230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.348247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.358900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:56.205 [2024-11-05 19:19:25.360181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.360200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.370906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166edd58 00:29:56.205 [2024-11-05 19:19:25.372223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.205 [2024-11-05 19:19:25.372240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:56.205 [2024-11-05 19:19:25.382830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:56.205 [2024-11-05 19:19:25.384127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.384146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.394180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fb480 00:29:56.206 [2024-11-05 19:19:25.395450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.395466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.407260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.206 [2024-11-05 19:19:25.408718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.408735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.420758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df550 00:29:56.206 [2024-11-05 19:19:25.422848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.422865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.431200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e5a90 00:29:56.206 [2024-11-05 19:19:25.432655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.432672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.444706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f3e60 00:29:56.206 [2024-11-05 19:19:25.446800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.455070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebb98 00:29:56.206 [2024-11-05 19:19:25.456511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.456527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.467007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebb98 00:29:56.206 [2024-11-05 19:19:25.468467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:1052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.468484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.478945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebb98 00:29:56.206 [2024-11-05 19:19:25.480349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.480366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.490034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.206 [2024-11-05 19:19:25.491501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.491518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.502774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.206 [2024-11-05 19:19:25.504157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.504173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.514698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e01f8 00:29:56.206 [2024-11-05 19:19:25.516102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.516120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:56.206 [2024-11-05 19:19:25.526651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.206 [2024-11-05 19:19:25.528081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.206 [2024-11-05 19:19:25.528098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.538595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2948 00:29:56.467 [2024-11-05 19:19:25.539992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.540008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.550583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.467 [2024-11-05 19:19:25.551981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.551998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.564040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e01f8 00:29:56.467 [2024-11-05 19:19:25.566105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.566122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.574435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.467 [2024-11-05 19:19:25.575826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.575842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.586341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.467 [2024-11-05 19:19:25.587755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.587772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.598253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.467 [2024-11-05 19:19:25.599666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.599682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.610160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.467 [2024-11-05 19:19:25.611588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.611605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.622093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.467 [2024-11-05 19:19:25.623512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.467 [2024-11-05 19:19:25.623527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.467 [2024-11-05 19:19:25.634011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 
19:19:25.635415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.635431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.645907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 19:19:25.647313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.647330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.657819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 19:19:25.659225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.659241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.669713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 19:19:25.671122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.681642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 19:19:25.683053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.683069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.693558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.468 [2024-11-05 19:19:25.694932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.694951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.705418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 00:29:56.468 [2024-11-05 19:19:25.706811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.706827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.717305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 
00:29:56.468 [2024-11-05 19:19:25.718714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.718730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.729216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 00:29:56.468 [2024-11-05 19:19:25.730609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.730626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.741131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ea680 00:29:56.468 [2024-11-05 19:19:25.742530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.742546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.753045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:56.468 [2024-11-05 19:19:25.754434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.754451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.764951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:56.468 [2024-11-05 19:19:25.766350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.766367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.776879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:56.468 [2024-11-05 19:19:25.778270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.778286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.468 [2024-11-05 19:19:25.788799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e73e0 00:29:56.468 [2024-11-05 19:19:25.790202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.468 [2024-11-05 19:19:25.790218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.800760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) 
with pdu=0x2000166e84c0 00:29:56.729 [2024-11-05 19:19:25.802110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.802129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.812701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 00:29:56.729 [2024-11-05 19:19:25.814079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.814096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.826220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f35f0 00:29:56.729 [2024-11-05 19:19:25.828250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.828266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.836567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.729 [2024-11-05 19:19:25.837961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.837977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.848481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.729 [2024-11-05 19:19:25.849861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.849877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.860377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:56.729 [2024-11-05 19:19:25.861717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.861734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.873832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd208 00:29:56.729 [2024-11-05 19:19:25.875861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.875876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.884208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d14520) with pdu=0x2000166f0788 00:29:56.729 [2024-11-05 19:19:25.885532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.885548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.895521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e5ec8 00:29:56.729 [2024-11-05 19:19:25.896831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.896847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.905503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e5220 00:29:56.729 [2024-11-05 19:19:25.906366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.906382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.920318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eee38 00:29:56.729 [2024-11-05 19:19:25.921994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.922010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.930736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f1ca0 00:29:56.729 [2024-11-05 19:19:25.931774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:56.729 [2024-11-05 19:19:25.941919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e99d8 00:29:56.729 [2024-11-05 19:19:25.942925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.729 [2024-11-05 19:19:25.942941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:25.954636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebb98 00:29:56.730 [2024-11-05 19:19:25.955640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:25.955656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:25.968132] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eee38 00:29:56.730 [2024-11-05 19:19:25.969812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:25.969828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:25.978494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 00:29:56.730 [2024-11-05 19:19:25.979543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:25.979560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:25.992150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f7da8 00:29:56.730 [2024-11-05 19:19:25.993804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:25.993820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:26.001965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ecc78 00:29:56.730 [2024-11-05 19:19:26.002958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:26.002974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:26.015057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eee38 00:29:56.730 [2024-11-05 19:19:26.016257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:26.016274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:26.027001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:56.730 [2024-11-05 19:19:26.028175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:26.028191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:26.038132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ef6a8 00:29:56.730 [2024-11-05 19:19:26.039297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.730 [2024-11-05 19:19:26.039312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:56.730 [2024-11-05 19:19:26.053362] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6b70 00:29:56.991 [2024-11-05 19:19:26.055488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.055504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.063734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.065194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.065210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.075681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.077172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.077188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.087604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.089070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.089086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.099539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.101008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.101024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.111445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.112886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.112905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.123334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166de038 00:29:56.991 [2024-11-05 19:19:26.124781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.124798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 
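Each record in the stream above follows the same pattern: the CRC32C data-digest check in tcp.c fails (data_crc32_calc_done), the host prints the WRITE command that was in flight, and the completion arrives as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0h (generic command status) with status code 22h, Transient Transport Error; dnr:0 means the do-not-retry bit is clear, so the I/O gets retried. A quick sanity tally over a saved copy of this output, assuming it was captured to a file (the name bperf.log is hypothetical):

  # each injected crc32c corruption should surface as exactly one transient transport error
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

The two counts tracking each other is what the test ultimately asserts through the error counters read back at the end of the run.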
[2024-11-05 19:19:26.136874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df118 00:29:56.991 [2024-11-05 19:19:26.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.138994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.147933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f9b30 00:29:56.991 [2024-11-05 19:19:26.149666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.149682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.157515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0350 00:29:56.991 [2024-11-05 19:19:26.158600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.158616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.170627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e5ec8 00:29:56.991 [2024-11-05 19:19:26.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.172095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.183714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f6890 00:29:56.991 [2024-11-05 19:19:26.185460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.991 [2024-11-05 19:19:26.185476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.991 [2024-11-05 19:19:26.193306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166df988 00:29:56.991 [2024-11-05 19:19:26.194390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.194406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.206434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eff18 00:29:56.992 [2024-11-05 19:19:26.207879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.207895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 
p:0 m:0 dnr:0 00:29:56.992 21316.00 IOPS, 83.27 MiB/s [2024-11-05T18:19:26.315Z] [2024-11-05 19:19:26.219462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f2510 00:29:56.992 [2024-11-05 19:19:26.221207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.221223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.229046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ef6a8 00:29:56.992 [2024-11-05 19:19:26.230130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.230147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.242179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166dece0 00:29:56.992 [2024-11-05 19:19:26.243633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.243649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.255219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fb480 00:29:56.992 [2024-11-05 19:19:26.256963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.256979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.264817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fac10 00:29:56.992 [2024-11-05 19:19:26.265895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.265912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.278593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0788 00:29:56.992 [2024-11-05 19:19:26.280329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.280346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.288950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ff3c8 00:29:56.992 [2024-11-05 19:19:26.290057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.290073] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.300851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ff3c8 00:29:56.992 [2024-11-05 19:19:26.301922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.301938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:56.992 [2024-11-05 19:19:26.314273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ff3c8 00:29:56.992 [2024-11-05 19:19:26.315996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.992 [2024-11-05 19:19:26.316015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.324633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.253 [2024-11-05 19:19:26.325705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.325721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.336547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.253 [2024-11-05 19:19:26.337624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.337640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.348451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.253 [2024-11-05 19:19:26.349521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.349538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.360362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.253 [2024-11-05 19:19:26.361432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.361448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.371473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0ff8 00:29:57.253 [2024-11-05 19:19:26.372528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.372543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.385761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eaab8 00:29:57.253 [2024-11-05 19:19:26.387430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.387446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.396157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166de470 00:29:57.253 [2024-11-05 19:19:26.397232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.397248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.409694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e38d0 00:29:57.253 [2024-11-05 19:19:26.411410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.411426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.421591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0788 00:29:57.253 [2024-11-05 19:19:26.423269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.423287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.431970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 00:29:57.253 [2024-11-05 19:19:26.433003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.433020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.443922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 00:29:57.253 [2024-11-05 19:19:26.444981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.253 [2024-11-05 19:19:26.444997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.253 [2024-11-05 19:19:26.455851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 00:29:57.253 [2024-11-05 19:19:26.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 
19:19:26.456926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.467781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 00:29:57.254 [2024-11-05 19:19:26.468817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.468833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.479723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 00:29:57.254 [2024-11-05 19:19:26.480774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.480789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.491645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.254 [2024-11-05 19:19:26.492700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.492716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.503595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fe2e8 00:29:57.254 [2024-11-05 19:19:26.504638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.504654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.515561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ec408 00:29:57.254 [2024-11-05 19:19:26.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.516611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.527510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.254 [2024-11-05 19:19:26.528566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.528583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.541036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e4578 00:29:57.254 [2024-11-05 19:19:26.542691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:57.254 [2024-11-05 19:19:26.542707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.551421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f6020 00:29:57.254 [2024-11-05 19:19:26.552455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.552471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.563348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f6020 00:29:57.254 [2024-11-05 19:19:26.564391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.564407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.254 [2024-11-05 19:19:26.575272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f6020 00:29:57.254 [2024-11-05 19:19:26.576316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.254 [2024-11-05 19:19:26.576332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.587225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f6020 00:29:57.515 [2024-11-05 19:19:26.588268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.588285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.599162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fb048 00:29:57.515 [2024-11-05 19:19:26.600218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.600234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.611176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e9e10 00:29:57.515 [2024-11-05 19:19:26.612180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.612196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.622550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166efae0 00:29:57.515 [2024-11-05 19:19:26.623576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21847 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.623592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.635662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fa3a0 00:29:57.515 [2024-11-05 19:19:26.636859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.636875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.647737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fdeb0 00:29:57.515 [2024-11-05 19:19:26.648907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.648923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.661245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e2c28 00:29:57.515 [2024-11-05 19:19:26.663092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.663108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.671626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166dece0 00:29:57.515 [2024-11-05 19:19:26.672812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.515 [2024-11-05 19:19:26.672829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:57.515 [2024-11-05 19:19:26.683545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e6300 00:29:57.515 [2024-11-05 19:19:26.684761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.684778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.697022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f20d8 00:29:57.516 [2024-11-05 19:19:26.698847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.698863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.707471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166feb58 00:29:57.516 [2024-11-05 19:19:26.708665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:4769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.708682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.719472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fc560 00:29:57.516 [2024-11-05 19:19:26.720638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.731448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fdeb0 00:29:57.516 [2024-11-05 19:19:26.732632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.732652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.742586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.743739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.743757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.755289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.756452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.756469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.767239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.768420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.768437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.779187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.780352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.780368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.791118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.792287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:9556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.792304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.803075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fd640 00:29:57.516 [2024-11-05 19:19:26.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.804263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.815006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0350 00:29:57.516 [2024-11-05 19:19:26.816175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.816192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.828553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166feb58 00:29:57.516 [2024-11-05 19:19:26.830361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.830378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:57.516 [2024-11-05 19:19:26.838931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:57.516 [2024-11-05 19:19:26.840109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.516 [2024-11-05 19:19:26.840125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.850908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:57.777 [2024-11-05 19:19:26.852025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.852042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.864367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fbcf0 00:29:57.777 [2024-11-05 19:19:26.866169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.866186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.876310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ed920 00:29:57.777 [2024-11-05 19:19:26.878103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.878119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.886694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e0630 00:29:57.777 [2024-11-05 19:19:26.887817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.887833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.898624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e0630 00:29:57.777 [2024-11-05 19:19:26.899759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.899775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.910559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e0630 00:29:57.777 [2024-11-05 19:19:26.911694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.911710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.922506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166de8a8 00:29:57.777 [2024-11-05 19:19:26.923644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.923660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.934470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f20d8 00:29:57.777 [2024-11-05 19:19:26.935594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.935611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.947983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fc128 00:29:57.777 [2024-11-05 19:19:26.949720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.949737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.958408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ec408 00:29:57.777 [2024-11-05 
19:19:26.959549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.959566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.971958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.777 [2024-11-05 19:19:26.973739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.973758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.981619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f0350 00:29:57.777 [2024-11-05 19:19:26.982734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.982754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:57.777 [2024-11-05 19:19:26.994390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ec408 00:29:57.777 [2024-11-05 19:19:26.995526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.777 [2024-11-05 19:19:26.995543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.006542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e84c0 00:29:57.778 [2024-11-05 19:19:27.007658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.007674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.017687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166de470 00:29:57.778 [2024-11-05 19:19:27.018791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.018808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.032549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f8618 00:29:57.778 [2024-11-05 19:19:27.034484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.034500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.044465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e3d08 
00:29:57.778 [2024-11-05 19:19:27.046353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.046372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.055271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eb328 00:29:57.778 [2024-11-05 19:19:27.056699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.056716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.067383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166eee38 00:29:57.778 [2024-11-05 19:19:27.068800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.068818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.079345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f1868 00:29:57.778 [2024-11-05 19:19:27.080781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.080798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:57.778 [2024-11-05 19:19:27.091379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f7da8 00:29:57.778 [2024-11-05 19:19:27.092818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.778 [2024-11-05 19:19:27.092835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.103337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166fda78 00:29:58.038 [2024-11-05 19:19:27.104788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.104804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.115306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f5378 00:29:58.038 [2024-11-05 19:19:27.116734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.116755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.128785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with 
pdu=0x2000166f35f0 00:29:58.038 [2024-11-05 19:19:27.130860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.130876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.140680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166f5378 00:29:58.038 [2024-11-05 19:19:27.142750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.142767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.151061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.152481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.152498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.163045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.164466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.164482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.175010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.176426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.176443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.187046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.188453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.188470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.198975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.200386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.200403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 [2024-11-05 19:19:27.210934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d14520) with pdu=0x2000166ebfd0 00:29:58.038 [2024-11-05 19:19:27.212333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.212350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:58.038 21351.50 IOPS, 83.40 MiB/s [2024-11-05T18:19:27.361Z] [2024-11-05 19:19:27.222833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14520) with pdu=0x2000166e95a0 00:29:58.038 [2024-11-05 19:19:27.224238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.038 [2024-11-05 19:19:27.224254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:58.038 00:29:58.039 Latency(us) 00:29:58.039 [2024-11-05T18:19:27.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.039 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.039 nvme0n1 : 2.01 21368.18 83.47 0.00 0.00 5981.30 2116.27 14090.24 00:29:58.039 [2024-11-05T18:19:27.362Z] =================================================================================================================== 00:29:58.039 [2024-11-05T18:19:27.362Z] Total : 21368.18 83.47 0.00 0.00 5981.30 2116.27 14090.24 00:29:58.039 { 00:29:58.039 "results": [ 00:29:58.039 { 00:29:58.039 "job": "nvme0n1", 00:29:58.039 "core_mask": "0x2", 00:29:58.039 "workload": "randwrite", 00:29:58.039 "status": "finished", 00:29:58.039 "queue_depth": 128, 00:29:58.039 "io_size": 4096, 00:29:58.039 "runtime": 2.007424, 00:29:58.039 "iops": 21368.181310973665, 00:29:58.039 "mibps": 83.46945824599088, 00:29:58.039 "io_failed": 0, 00:29:58.039 "io_timeout": 0, 00:29:58.039 "avg_latency_us": 5981.297910401368, 00:29:58.039 "min_latency_us": 2116.266666666667, 00:29:58.039 "max_latency_us": 14090.24 00:29:58.039 } 00:29:58.039 ], 00:29:58.039 "core_count": 1 00:29:58.039 } 00:29:58.039 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:58.039 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:58.039 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:58.039 | .driver_specific 00:29:58.039 | .nvme_error 00:29:58.039 | .status_code 00:29:58.039 | .command_transient_transport_error' 00:29:58.039 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 )) 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 529072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 529072 ']' 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 529072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:58.299 19:19:27 
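The summary table and JSON block above are internally consistent, which is a useful sanity check when reading these runs: at the 4096-byte I/O size, 21368.18 IOPS works out to 21368.18 × 4096 / 1048576 ≈ 83.47 MiB/s, and with a queue depth of 128, Little's law puts the expected average latency at 128 / 21368.18 ≈ 5990 us, close to the reported 5981.30 us. A minimal sketch of that arithmetic (numbers copied from the table; the small latency gap is ordinary accounting noise):

  awk 'BEGIN {
      iops = 21368.18; io_size = 4096; qdepth = 128
      printf "throughput : %.2f MiB/s\n", iops * io_size / 1048576   # ~83.47, as reported
      printf "avg latency: %.0f us\n",    qdepth / iops * 1e6        # ~5990 vs 5981.30 reported
  }'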
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 529072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 529072' 00:29:58.299 killing process with pid 529072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 529072 00:29:58.299 Received shutdown signal, test time was about 2.000000 seconds 00:29:58.299 00:29:58.299 Latency(us) 00:29:58.299 [2024-11-05T18:19:27.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.299 [2024-11-05T18:19:27.622Z] =================================================================================================================== 00:29:58.299 [2024-11-05T18:19:27.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 529072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=529761 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 529761 /var/tmp/bperf.sock 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 529761 ']' 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:58.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:58.299 19:19:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:58.299 [2024-11-05 19:19:27.620863] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
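Before the first bperf instance is killed above, the test fetches its verdict through get_transient_errcount: a single bdev_get_iostat RPC whose per-status-code error counters are filtered with jq, and the run passes only if the count is positive ((( 168 > 0 )) in this trace). A standalone equivalent, reusing the socket path, script path, and bdev name from this run, would look roughly like:

  # pull the transient-transport-error counter for nvme0n1 out of bdev_get_iostat
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'

These counters are only populated because the controller was created with --nvme-error-stat; without that option the jq path above should come back empty.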
00:29:58.299 [2024-11-05 19:19:27.620921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529761 ] 00:29:58.299 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:58.299 Zero copy mechanism will not be used. 00:29:58.560 [2024-11-05 19:19:27.701984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.560 [2024-11-05 19:19:27.731202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.129 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.129 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:59.129 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.129 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:59.388 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:59.388 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.388 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:59.388 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.388 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.389 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.648 nvme0n1 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:59.648 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:59.648 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:59.648 Zero copy mechanism will not be used. 00:29:59.648 Running I/O for 2 seconds... 
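With the new bdevperf process up, the run is wired together over two RPC sockets: bperf_rpc pins -s /var/tmp/bperf.sock (the bdevperf app), while the plain rpc_cmd calls in the trace that follows go to the target app's default socket, where the crc32c corruption is injected. Untangled from the xtrace output, the setup sequence is roughly the following sketch (commands and arguments verbatim from the trace; the socket split and the flag comments are my reading of the helpers, not authoritative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bperf side: keep per-status-code NVMe error counters, retry failed I/O indefinitely
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: start clean, with no crc32c error injection
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # bperf side: attach the subsystem with TCP data digest (DDGST) enabled
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c results at the traced interval (-i 32) to force digest errors
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

Only after the corrupt injection is armed does perform_tests start the 2-second randwrite pass whose digest errors follow.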
00:29:59.648 [2024-11-05 19:19:28.950655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.648 [2024-11-05 19:19:28.951151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.648 [2024-11-05 19:19:28.951180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.649 [2024-11-05 19:19:28.962126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.649 [2024-11-05 19:19:28.962465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.649 [2024-11-05 19:19:28.962487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.649 [2024-11-05 19:19:28.973545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.649 [2024-11-05 19:19:28.973802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.649 [2024-11-05 19:19:28.973821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:28.984931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:28.985268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:28.985287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:28.996352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:28.996714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:28.996732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.007342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.007606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.007626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.018301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.018600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.018619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.028329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.028600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.028618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.038374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.038642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.038661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.048091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.048321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.048342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.057721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.058068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.058085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.066498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.066724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.075431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.075726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.075744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.082655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.082977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.082993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.090633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.090684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.090699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.099098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.099354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.099371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.103383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.103452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.103467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.110053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.110256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.110272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.115699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.115773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.115789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.120332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.120398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.120413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.128356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.128427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.128442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.136065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.136273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.136289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.143486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.143767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.143784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.149974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.150072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.150088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.910 [2024-11-05 19:19:29.155562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.910 [2024-11-05 19:19:29.155615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.910 [2024-11-05 19:19:29.155631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.163305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.163371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.163386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.169812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.170028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.170047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.177256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.177331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 
19:19:29.177346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.184692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.184945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.184962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.189338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.189402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.189417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.194688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.194777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.194793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.201257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.201500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.201516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.209370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.209667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.209684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.218147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.218218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.218233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.225813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.226071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:59.911 [2024-11-05 19:19:29.226088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.911 [2024-11-05 19:19:29.234130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:29:59.911 [2024-11-05 19:19:29.234407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.911 [2024-11-05 19:19:29.234423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.172 [2024-11-05 19:19:29.241800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.172 [2024-11-05 19:19:29.242073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.172 [2024-11-05 19:19:29.242089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.249920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.250101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.250116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.257220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.257453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.257469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.265417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.265656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.265672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.272041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.272258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.280954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.281007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.281021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.289928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.290000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.290015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.298318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.298598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.298614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.306798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.306871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.306887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.312964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.313214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.313230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.320321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.320395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.320410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.326432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.326490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.326505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.332313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.332380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.332395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.337277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.337329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.337343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.340684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.340737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.340757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.344055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.344109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.344125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.347688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.347743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.347766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.355078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.355346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.355362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.360244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.360350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.360366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.368874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.369143] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.369160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.377570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.377640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.377655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.386202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.386487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.386504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.393582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.393851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.393867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.401640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.401885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.401902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.405409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.405462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.405477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.412461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.412671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.412689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.421034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.421122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.173 [2024-11-05 19:19:29.421137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.173 [2024-11-05 19:19:29.429207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.173 [2024-11-05 19:19:29.429476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.429493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.437383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.437537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.437553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.446187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.446384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.446400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.454856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.455132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.462486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.462757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.462774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.470431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.470517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.470533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.478422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 
19:19:29.478521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.478537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.484980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.485047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.485063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.174 [2024-11-05 19:19:29.492547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.174 [2024-11-05 19:19:29.492611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.174 [2024-11-05 19:19:29.492627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.500145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.500208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.500224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.508327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.508407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.508422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.516095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.516196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.516212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.524134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.524210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.524225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.532106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with 
pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.532206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.532222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.539983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.540226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.540242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.548236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.548455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.548473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.557233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.557475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.557491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.565816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.566072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.566088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.573073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.573135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.573150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.580850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.443 [2024-11-05 19:19:29.581111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.443 [2024-11-05 19:19:29.581129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.443 [2024-11-05 19:19:29.589501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.589729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.589749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.598393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.598665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.598682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.608787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.609056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.609073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.619016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.619254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.619270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.628500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.628872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.639179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.639520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.639537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.649338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.649604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.649621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.659841] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.660122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.660138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.669952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.670256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.670273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.680176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.680559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.680577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.690907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.691204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.691221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.701359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.701431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.701446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.711955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.712217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.712232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.722947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.723212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.723229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.733786] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.734037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.734054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.744941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.745145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.755569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.755826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.755842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.444 [2024-11-05 19:19:29.765476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.444 [2024-11-05 19:19:29.765739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.444 [2024-11-05 19:19:29.765760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.711 [2024-11-05 19:19:29.775562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.711 [2024-11-05 19:19:29.775691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.711 [2024-11-05 19:19:29.775706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.711 [2024-11-05 19:19:29.786093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.711 [2024-11-05 19:19:29.786420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.711 [2024-11-05 19:19:29.786436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.711 [2024-11-05 19:19:29.795234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.711 [2024-11-05 19:19:29.795525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.711 [2024-11-05 19:19:29.795541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.711 
[2024-11-05 19:19:29.804452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.711 [2024-11-05 19:19:29.804705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.711 [2024-11-05 19:19:29.804725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.711 [2024-11-05 19:19:29.813893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.711 [2024-11-05 19:19:29.814099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.711 [2024-11-05 19:19:29.814114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.712 [2024-11-05 19:19:29.823316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.712 [2024-11-05 19:19:29.823566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.712 [2024-11-05 19:19:29.823583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:00.712 [2024-11-05 19:19:29.831147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.712 [2024-11-05 19:19:29.831291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.712 [2024-11-05 19:19:29.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.712 [2024-11-05 19:19:29.838913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.712 [2024-11-05 19:19:29.839211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.712 [2024-11-05 19:19:29.839227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:00.712 [2024-11-05 19:19:29.846500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.712 [2024-11-05 19:19:29.846767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.712 [2024-11-05 19:19:29.846783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:00.712 [2024-11-05 19:19:29.856175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:00.712 [2024-11-05 19:19:29.856515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.712 [2024-11-05 19:19:29.856531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0
00:30:00.712 [2024-11-05 19:19:29.862549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90
00:30:00.712 [2024-11-05 19:19:29.862827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:00.712 [2024-11-05 19:19:29.862844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[the same data_crc32_calc_done / print_command / print_completion triplet repeats for the next 11 WRITE commands, timestamps 19:19:29.869 through 19:19:29.936; only the timestamp, lba, and the cycling sqhd (0001/0021/0041/0061) change]
00:30:00.712 3835.00 IOPS, 479.38 MiB/s [2024-11-05T18:19:30.035Z]
[the triplet pattern then continues for roughly 120 further WRITE commands, 19:19:29.941 through 19:19:30.940, every one a data digest error on tqpair=(0x1d14860) completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22); the final two commands use cid:15 instead of cid:0, and the elapsed-time prefix advances from 00:30:00.712 to 00:30:01.762]
19:19:30.900650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.762 [2024-11-05 19:19:30.908870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:01.762 [2024-11-05 19:19:30.909153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.762 [2024-11-05 19:19:30.909170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.762 [2024-11-05 19:19:30.916891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:01.762 [2024-11-05 19:19:30.917156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.762 [2024-11-05 19:19:30.917171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:01.762 [2024-11-05 19:19:30.924020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:01.762 [2024-11-05 19:19:30.924284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.762 [2024-11-05 19:19:30.924300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:01.762 [2024-11-05 19:19:30.931828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:01.762 [2024-11-05 19:19:30.932113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.762 [2024-11-05 19:19:30.932130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:01.762 [2024-11-05 19:19:30.940621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d14860) with pdu=0x2000166fef90 00:30:01.762 [2024-11-05 19:19:30.940688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.762 [2024-11-05 19:19:30.940704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:01.762 3845.50 IOPS, 480.69 MiB/s 00:30:01.762 Latency(us) 00:30:01.762 [2024-11-05T18:19:31.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.762 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:01.762 nvme0n1 : 2.01 3843.88 480.49 0.00 0.00 4155.72 1624.75 12178.77 00:30:01.762 [2024-11-05T18:19:31.085Z] =================================================================================================================== 00:30:01.762 [2024-11-05T18:19:31.085Z] Total : 3843.88 480.49 0.00 0.00 4155.72 1624.75 12178.77 00:30:01.762 { 00:30:01.762 "results": [ 00:30:01.762 { 00:30:01.762 "job": "nvme0n1", 00:30:01.762 "core_mask": "0x2", 00:30:01.762 "workload": "randwrite", 00:30:01.762 "status": "finished", 00:30:01.762 "queue_depth": 16, 
00:30:01.762 "io_size": 131072, 00:30:01.762 "runtime": 2.005004, 00:30:01.762 "iops": 3843.8826057204874, 00:30:01.762 "mibps": 480.4853257150609, 00:30:01.762 "io_failed": 0, 00:30:01.762 "io_timeout": 0, 00:30:01.762 "avg_latency_us": 4155.7211539293285, 00:30:01.762 "min_latency_us": 1624.7466666666667, 00:30:01.762 "max_latency_us": 12178.773333333333 00:30:01.762 } 00:30:01.762 ], 00:30:01.762 "core_count": 1 00:30:01.762 } 00:30:01.762 19:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:01.762 19:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:01.762 19:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:01.762 | .driver_specific 00:30:01.762 | .nvme_error 00:30:01.762 | .status_code 00:30:01.762 | .command_transient_transport_error' 00:30:01.762 19:19:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 248 > 0 )) 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 529761 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 529761 ']' 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 529761 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 529761 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 529761' 00:30:02.023 killing process with pid 529761 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 529761 00:30:02.023 Received shutdown signal, test time was about 2.000000 seconds 00:30:02.023 00:30:02.023 Latency(us) 00:30:02.023 [2024-11-05T18:19:31.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.023 [2024-11-05T18:19:31.346Z] =================================================================================================================== 00:30:02.023 [2024-11-05T18:19:31.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 529761 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 527349 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 527349 ']' 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 
527349 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:02.023 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 527349 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 527349' 00:30:02.283 killing process with pid 527349 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 527349 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 527349 00:30:02.283 00:30:02.283 real 0m16.393s 00:30:02.283 user 0m32.515s 00:30:02.283 sys 0m3.420s 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:02.283 ************************************ 00:30:02.283 END TEST nvmf_digest_error 00:30:02.283 ************************************ 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:02.283 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:02.283 rmmod nvme_tcp 00:30:02.284 rmmod nvme_fabrics 00:30:02.284 rmmod nvme_keyring 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 527349 ']' 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 527349 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 527349 ']' 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 527349 00:30:02.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (527349) - No such process 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 527349 is not found' 00:30:02.284 Process with pid 527349 is not found 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:02.284 
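[editor's note] The transient-error check traced above (host/digest.sh@18/@27-28/@71) condenses to a short pipeline. The rpc.py path, the bperf socket, and the jq filter below are verbatim from the xtrace output; the helper names and bodies are reconstructed from that trace and may not match host/digest.sh line for line.

    # Reconstruction of the traced helpers; treat as a sketch, not the script itself.
    bperf_rpc() {
        # bdevperf runs with its own RPC socket, /var/tmp/bperf.sock in this run
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # pull the per-bdev NVMe error counters and keep only the transient
        # transport-error count (the data-digest failures injected by this test)
        bperf_rpc bdev_get_iostat -b "$1" | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
    }

    # digest.sh@71 then asserts that at least one transient transport error was
    # counted -- 248 in this run:
    (($(get_transient_errcount nvme0n1) > 0))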
19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@264 -- # local dev 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:02.284 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # return 0 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:04.828 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@284 -- # iptr 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-save 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-restore 00:30:04.829 00:30:04.829 real 0m42.636s 00:30:04.829 user 1m7.497s 00:30:04.829 sys 0m12.317s 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:04.829 
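[editor's note] The nvmf_fini teardown traced above (nvmf/setup.sh@264-284, nvmf/common.sh@542) reduces to a few idempotent steps. A minimal sketch, assuming the two e810 ports mapped in this run (cvl_0_0/cvl_0_1); the flush and iptables commands are verbatim from the trace, the surrounding control flow is paraphrased.

    # Paraphrase of the traced nvmf_fini path; loop and guards are condensed.
    nvmf_fini_sketch() {
        _remove_target_ns 15> /dev/null      # drop the nvmf_ns_spdk namespace, if any
        for dev in cvl_0_0 cvl_0_1; do       # dev_map entries in this run
            # flush only interfaces that still exist
            [[ -e /sys/class/net/$dev/address ]] && ip addr flush dev "$dev"
        done
        dev_map=()                           # reset_setup_interfaces
        # strip only the SPDK_NVMF-tagged ACCEPT rules added during nvmftestinit
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }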
************************************ 00:30:04.829 END TEST nvmf_digest 00:30:04.829 ************************************ 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.829 ************************************ 00:30:04.829 START TEST nvmf_host_discovery 00:30:04.829 ************************************ 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:04.829 * Looking for test storage... 00:30:04.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:04.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.829 --rc genhtml_branch_coverage=1 00:30:04.829 --rc genhtml_function_coverage=1 00:30:04.829 --rc genhtml_legend=1 00:30:04.829 --rc geninfo_all_blocks=1 00:30:04.829 --rc geninfo_unexecuted_blocks=1 00:30:04.829 00:30:04.829 ' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:04.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.829 --rc genhtml_branch_coverage=1 00:30:04.829 --rc genhtml_function_coverage=1 00:30:04.829 --rc genhtml_legend=1 00:30:04.829 --rc geninfo_all_blocks=1 00:30:04.829 --rc geninfo_unexecuted_blocks=1 00:30:04.829 00:30:04.829 ' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:04.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.829 --rc genhtml_branch_coverage=1 00:30:04.829 --rc genhtml_function_coverage=1 00:30:04.829 --rc genhtml_legend=1 00:30:04.829 --rc geninfo_all_blocks=1 00:30:04.829 --rc geninfo_unexecuted_blocks=1 00:30:04.829 00:30:04.829 ' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:04.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.829 --rc genhtml_branch_coverage=1 00:30:04.829 --rc genhtml_function_coverage=1 00:30:04.829 --rc genhtml_legend=1 00:30:04.829 --rc geninfo_all_blocks=1 00:30:04.829 --rc geninfo_unexecuted_blocks=1 00:30:04.829 00:30:04.829 ' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:04.829 19:19:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.829 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:04.830 19:19:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:04.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:30:04.830 19:19:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.966 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.966 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:30:12.966 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:12.966 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 
00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:12.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:12.967 19:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:12.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:12.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:12.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:12.967 19:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:12.967 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:12.968 10.0.0.1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:12.968 19:19:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:12.968 10.0.0.2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:12.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.493 ms 00:30:12.968 00:30:12.968 --- 10.0.0.1 ping statistics --- 00:30:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.968 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:12.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:12.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:30:12.968 00:30:12.968 --- 10.0.0.2 ping statistics --- 00:30:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.968 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:12.968 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=534739 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 534739 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 534739 ']' 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:12.969 19:19:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:12.969 [2024-11-05 19:19:41.561776] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:30:12.969 [2024-11-05 19:19:41.561843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.969 [2024-11-05 19:19:41.663751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.969 [2024-11-05 19:19:41.713637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.969 [2024-11-05 19:19:41.713692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.969 [2024-11-05 19:19:41.713701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.969 [2024-11-05 19:19:41.713708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.969 [2024-11-05 19:19:41.713715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.969 [2024-11-05 19:19:41.714512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 [2024-11-05 19:19:42.424147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 [2024-11-05 19:19:42.432372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 null0 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 null1 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=534831 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 534831 /tmp/host.sock 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 534831 ']' 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:13.230 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.230 19:19:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:13.230 [2024-11-05 19:19:42.527380] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:30:13.230 [2024-11-05 19:19:42.527448] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid534831 ] 00:30:13.490 [2024-11-05 19:19:42.602446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.490 [2024-11-05 19:19:42.644379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # notify_id=0 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.061 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq 
-r '.[].name' 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_bdev_list 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.322 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.322 [2024-11-05 19:19:43.643424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.583 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:30:14.584 19:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:30:15.154 [2024-11-05 19:19:44.372699] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:15.154 [2024-11-05 19:19:44.372718] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:15.154 [2024-11-05 19:19:44.372731] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:15.415 
[2024-11-05 19:19:44.503158] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:15.415 [2024-11-05 19:19:44.684338] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:15.415 [2024-11-05 19:19:44.685416] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14b9780:1 started. 00:30:15.415 [2024-11-05 19:19:44.687069] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:15.415 [2024-11-05 19:19:44.687088] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:15.415 [2024-11-05 19:19:44.692161] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14b9780 was disconnected and freed. delete nvme_qpair. 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.675 19:19:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:15.675 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.676 19:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.936 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 [2024-11-05 19:19:45.088756] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14c6ab0:1 started. 00:30:15.937 [2024-11-05 19:19:45.093062] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14c6ab0 was disconnected and freed. delete nvme_qpair. 
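Condensing the xtrace above into the RPC sequence it actually exercises may help when skimming: rpc_cmd is the autotest framework's wrapper around scripts/rpc.py, the bare calls go to the target app running under the nvmf_ns_spdk netns on its default socket, and -s /tmp/host.sock addresses the second, host-side nvmf_tgt instance (started with -m 0x1 -r /tmp/host.sock). A rough sketch, with ordering condensed from the trace and socket paths as captured above:

    # Target side: TCP transport plus a discovery listener on 10.0.0.2:8009.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Two null bdevs (512 B block size) to back the namespaces.
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # Host side: start the discovery service against the 8009 listener.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: publish a subsystem with one namespace, a 4420 data listener,
    # and the host NQN allow-listed; discovery then attaches it as nvme0/nvme0n1.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Hot-adding the second namespace makes the host rescan and register
    # nvme0n2; that bdev event is what the notification checks are counting.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

Each is_notification_count_eq check then reduces to rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length', compared against the expected delta.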
00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 [2024-11-05 19:19:45.187643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:15.937 [2024-11-05 19:19:45.188352] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:15.937 [2024-11-05 19:19:45.188373] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:15.937 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:16.197 [2024-11-05 19:19:45.275073] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:16.197 19:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:30:16.456 [2024-11-05 19:19:45.539471] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:30:16.456 [2024-11-05 19:19:45.539509] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:16.456 [2024-11-05 19:19:45.539517] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:16.456 [2024-11-05 19:19:45.539522] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
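The wait that follows hinges on get_subsystem_paths (host/discovery.sh@58): it lists the transport service ID of every connected path for a controller, so once the 4421 listener is announced through a discovery log page change the expected value flips from '4420' to '4420 4421'. A standalone equivalent of the check, assuming the same host RPC socket as above:

    get_subsystem_paths() {
        # One trsvcid per connected path; sort -n | xargs flattens the
        # output to a single line, e.g. "4420 4421".
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # waitforcondition in the trace retries this at most 10 times; a plain
    # poll loop is shown here for clarity.
    while [[ "$(get_subsystem_paths nvme0)" != "4420 4421" ]]; do sleep 1; done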
00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.399 [2024-11-05 19:19:46.464058] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:17.399 [2024-11-05 19:19:46.464081] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:17.399 [2024-11-05 19:19:46.468697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.399 [2024-11-05 19:19:46.468716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.399 [2024-11-05 19:19:46.468726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.399 [2024-11-05 19:19:46.468734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.399 [2024-11-05 19:19:46.468742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.399 [2024-11-05 19:19:46.468755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.399 [2024-11-05 19:19:46.468764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:17.399 [2024-11-05 19:19:46.468771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.399 [2024-11-05 19:19:46.468778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:17.399 [2024-11-05 19:19:46.478710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.399 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.399 [2024-11-05 19:19:46.488752] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.399 [2024-11-05 19:19:46.488767] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.399 [2024-11-05 19:19:46.488773] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.399 [2024-11-05 19:19:46.488778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.399 [2024-11-05 19:19:46.488797] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:17.399 [2024-11-05 19:19:46.489222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.399 [2024-11-05 19:19:46.489262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.399 [2024-11-05 19:19:46.489275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.399 [2024-11-05 19:19:46.489296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.399 [2024-11-05 19:19:46.489322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.399 [2024-11-05 19:19:46.489331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.399 [2024-11-05 19:19:46.489341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.399 [2024-11-05 19:19:46.489349] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:17.399 [2024-11-05 19:19:46.489355] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
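The xtrace lines interleaved through this section come from a polling helper in autotest_common.sh: it evals a condition string up to max=10 times and returns success as soon as the condition holds. A minimal sketch reconstructed from the visible trace (cond, max=10, (( max-- )), eval, return 0); the delay between retries is an assumption, since no sleep appears in this excerpt:

    # hedged reconstruction of the waitforcondition helper traced above
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # condition strings such as '[[ "$(get_subsystem_names)" == "nvme0" ]]'
            # are passed in quoted and evaluated here
            if eval "$cond"; then
                return 0
            fi
            sleep 1    # retry delay assumed; not visible in the trace
        done
        return 1
    }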
00:30:17.399 [2024-11-05 19:19:46.489359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.399 [2024-11-05 19:19:46.498832] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.399 [2024-11-05 19:19:46.498847] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.399 [2024-11-05 19:19:46.498852] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.399 [2024-11-05 19:19:46.498857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.399 [2024-11-05 19:19:46.498874] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:17.399 [2024-11-05 19:19:46.499070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.399 [2024-11-05 19:19:46.499084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.399 [2024-11-05 19:19:46.499092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.399 [2024-11-05 19:19:46.499109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.399 [2024-11-05 19:19:46.499120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.399 [2024-11-05 19:19:46.499127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.399 [2024-11-05 19:19:46.499135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.399 [2024-11-05 19:19:46.499141] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:17.399 [2024-11-05 19:19:46.499145] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.399 [2024-11-05 19:19:46.499150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.399 [2024-11-05 19:19:46.508906] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.399 [2024-11-05 19:19:46.508920] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.399 [2024-11-05 19:19:46.508925] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.399 [2024-11-05 19:19:46.508929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.399 [2024-11-05 19:19:46.508945] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:17.399 [2024-11-05 19:19:46.509270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.400 [2024-11-05 19:19:46.509284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.400 [2024-11-05 19:19:46.509292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.400 [2024-11-05 19:19:46.509303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.400 [2024-11-05 19:19:46.509314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.400 [2024-11-05 19:19:46.509320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.400 [2024-11-05 19:19:46.509328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.400 [2024-11-05 19:19:46.509334] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:17.400 [2024-11-05 19:19:46.509339] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.400 [2024-11-05 19:19:46.509343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.400 [2024-11-05 19:19:46.518976] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.400 [2024-11-05 19:19:46.518989] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.400 [2024-11-05 19:19:46.518994] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.518998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.400 [2024-11-05 19:19:46.519013] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.519208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.400 [2024-11-05 19:19:46.519220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.400 [2024-11-05 19:19:46.519231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.400 [2024-11-05 19:19:46.519244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.400 [2024-11-05 19:19:46.519255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.400 [2024-11-05 19:19:46.519262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.400 [2024-11-05 19:19:46.519269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.400 [2024-11-05 19:19:46.519275] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:17.400 [2024-11-05 19:19:46.519280] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.400 [2024-11-05 19:19:46.519284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.400 [2024-11-05 19:19:46.529045] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.400 [2024-11-05 19:19:46.529057] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.400 [2024-11-05 19:19:46.529062] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.529067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.400 [2024-11-05 19:19:46.529082] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:17.400 [2024-11-05 19:19:46.529418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.400 [2024-11-05 19:19:46.529430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.400 [2024-11-05 19:19:46.529438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.400 [2024-11-05 19:19:46.529449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.400 [2024-11-05 19:19:46.529467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.400 [2024-11-05 19:19:46.529477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.400 [2024-11-05 19:19:46.529485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.400 [2024-11-05 19:19:46.529491] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:17.400 [2024-11-05 19:19:46.529495] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.400 [2024-11-05 19:19:46.529500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.400 [2024-11-05 19:19:46.539114] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.400 [2024-11-05 19:19:46.539129] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.400 [2024-11-05 19:19:46.539134] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.539138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.400 [2024-11-05 19:19:46.539154] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.539485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.400 [2024-11-05 19:19:46.539498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.400 [2024-11-05 19:19:46.539505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.400 [2024-11-05 19:19:46.539517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.400 [2024-11-05 19:19:46.539527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.400 [2024-11-05 19:19:46.539534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.400 [2024-11-05 19:19:46.539541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.400 [2024-11-05 19:19:46.539547] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:17.400 [2024-11-05 19:19:46.539552] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.400 [2024-11-05 19:19:46.539556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:17.400 [2024-11-05 19:19:46.549186] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:17.400 [2024-11-05 19:19:46.549197] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:17.400 [2024-11-05 19:19:46.549202] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.549207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:17.400 [2024-11-05 19:19:46.549221] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:17.400 [2024-11-05 19:19:46.549528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.400 [2024-11-05 19:19:46.549541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1489e10 with addr=10.0.0.2, port=4420 00:30:17.400 [2024-11-05 19:19:46.549548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1489e10 is same with the state(6) to be set 00:30:17.400 [2024-11-05 19:19:46.549559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1489e10 (9): Bad file descriptor 00:30:17.400 [2024-11-05 19:19:46.549577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:17.400 [2024-11-05 19:19:46.549583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:17.400 [2024-11-05 19:19:46.549591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:17.400 [2024-11-05 19:19:46.549596] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:17.400 [2024-11-05 19:19:46.549601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:17.400 [2024-11-05 19:19:46.549605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
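The near-identical error blocks repeating above (roughly every 10 ms) are the bdev_nvme reconnect poller: the test removed the 4420 listener earlier, so each reconnect attempt to 10.0.0.2:4420 fails in posix_sock_create with errno 111 until the discovery service drops the stale path (visible just below as "4420 not found ... 4421 found again"). Decoding the errno seen in these lines:

    # errno 111 from the posix_sock_create errors above is ECONNREFUSED
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # -> ECONNREFUSED Connection refused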
00:30:17.400 [2024-11-05 19:19:46.551874] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:17.400 [2024-11-05 19:19:46.551892] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.400 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:30:17.401 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.662 19:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:18.604 [2024-11-05 19:19:47.900804] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:18.604 [2024-11-05 19:19:47.900823] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:18.604 [2024-11-05 19:19:47.900835] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:18.864 [2024-11-05 19:19:47.987112] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:19.125 [2024-11-05 19:19:48.290631] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:30:19.125 [2024-11-05 19:19:48.291413] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14a3690:1 started. 
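rpc_cmd speaks JSON-RPC to the SPDK host application over the Unix socket named by -s. The bdev_nvme_start_discovery call issued next is wrapped in NOT because a discovery service named "nvme" is already running, so the expected response is the -17 "File exists" error echoed below. A hedged sketch of the wire-level exchange (field values are the ones printed in the log; the JSON-RPC 2.0 framing and the nc invocation are assumptions, not shown in this run):

    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_start_discovery",
      "params":{"name":"nvme","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
      "trsvcid":"8009","hostnqn":"nqn.2021-12.io.spdk:test","wait_for_attach":true}}' \
        | nc -U /tmp/host.sock
    # expected error body: {"code": -17, "message": "File exists"}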
00:30:19.125 [2024-11-05 19:19:48.293239] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:19.125 [2024-11-05 19:19:48.293267] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.125 request: 00:30:19.125 { 00:30:19.125 "name": "nvme", 00:30:19.125 "trtype": "tcp", 00:30:19.125 "traddr": "10.0.0.2", 00:30:19.125 "adrfam": "ipv4", 00:30:19.125 "trsvcid": "8009", 00:30:19.125 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:19.125 "wait_for_attach": true, 00:30:19.125 "method": "bdev_nvme_start_discovery", 00:30:19.125 "req_id": 1 00:30:19.125 } 00:30:19.125 Got JSON-RPC error response 00:30:19.125 response: 00:30:19.125 { 00:30:19.125 "code": -17, 00:30:19.125 "message": "File exists" 00:30:19.125 } 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # get_discovery_ctrlrs 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.125 19:19:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.125 [2024-11-05 19:19:48.342553] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14a3690 was disconnected and freed. delete nvme_qpair. 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.125 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.126 request: 00:30:19.126 { 00:30:19.126 "name": "nvme_second", 00:30:19.126 "trtype": "tcp", 00:30:19.126 "traddr": "10.0.0.2", 00:30:19.126 "adrfam": "ipv4", 00:30:19.126 "trsvcid": "8009", 00:30:19.126 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:19.126 "wait_for_attach": true, 00:30:19.126 "method": 
"bdev_nvme_start_discovery", 00:30:19.126 "req_id": 1 00:30:19.126 } 00:30:19.126 Got JSON-RPC error response 00:30:19.126 response: 00:30:19.126 { 00:30:19.126 "code": -17, 00:30:19.126 "message": "File exists" 00:30:19.126 } 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:30:19.126 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:19.387 19:19:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.387 19:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:20.327 [2024-11-05 19:19:49.540708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.327 [2024-11-05 19:19:49.540738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c8010 with addr=10.0.0.2, port=8010 00:30:20.327 [2024-11-05 19:19:49.540755] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:20.327 [2024-11-05 19:19:49.540763] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:20.327 [2024-11-05 19:19:49.540770] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:21.268 [2024-11-05 19:19:50.543083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.268 [2024-11-05 19:19:50.543122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c8010 with addr=10.0.0.2, port=8010 00:30:21.268 [2024-11-05 19:19:50.543138] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:21.268 [2024-11-05 19:19:50.543146] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:21.268 [2024-11-05 19:19:50.543153] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:22.651 [2024-11-05 19:19:51.545052] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:22.651 request: 00:30:22.651 { 00:30:22.651 "name": "nvme_second", 00:30:22.651 "trtype": "tcp", 00:30:22.651 "traddr": "10.0.0.2", 00:30:22.651 "adrfam": "ipv4", 00:30:22.651 "trsvcid": "8010", 00:30:22.651 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:22.651 "wait_for_attach": false, 00:30:22.651 "attach_timeout_ms": 3000, 00:30:22.651 "method": "bdev_nvme_start_discovery", 00:30:22.651 "req_id": 1 00:30:22.651 } 00:30:22.651 Got JSON-RPC error response 00:30:22.651 response: 00:30:22.651 { 00:30:22.651 "code": -110, 00:30:22.651 "message": "Connection timed out" 00:30:22.651 } 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:22.651 19:19:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 534831 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:22.651 rmmod nvme_tcp 00:30:22.651 rmmod nvme_fabrics 00:30:22.651 rmmod nvme_keyring 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 534739 ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 534739 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 534739 ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 534739 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 534739 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 534739' 00:30:22.651 killing process with pid 534739 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 534739 
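The @952-@971 trace above is the killprocess teardown helper: probe the pid with kill -0, confirm the process name (reactor_1 here, the nvmf target's SPDK reactor), then kill it; the wait on the same pid follows just below. A hedged reconstruction from the visible trace (the sudo branch is not exercised in this log, so its real behavior is not shown):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                  # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            if [ "$name" = sudo ]; then
                return 1    # sudo-wrapped case handled differently in the real helper (not shown)
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }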
00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 534739 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@264 -- # local dev 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:22.651 19:19:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # return 0 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@284 -- # iptr 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-save 00:30:25.196 19:19:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:30:25.196 00:30:25.196 real 0m20.169s 00:30:25.196 user 0m23.412s 00:30:25.196 sys 0m7.024s 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:25.196 ************************************ 00:30:25.196 END TEST nvmf_host_discovery 00:30:25.196 ************************************ 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@34 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:25.196 19:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.196 ************************************ 00:30:25.196 START TEST nvmf_discovery_remove_ifc 00:30:25.196 ************************************ 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:25.196 * Looking for test storage... 00:30:25.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 
00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.196 --rc genhtml_branch_coverage=1 00:30:25.196 --rc genhtml_function_coverage=1 00:30:25.196 --rc genhtml_legend=1 00:30:25.196 --rc geninfo_all_blocks=1 00:30:25.196 --rc geninfo_unexecuted_blocks=1 00:30:25.196 00:30:25.196 ' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.196 --rc genhtml_branch_coverage=1 00:30:25.196 --rc genhtml_function_coverage=1 00:30:25.196 --rc genhtml_legend=1 00:30:25.196 --rc geninfo_all_blocks=1 00:30:25.196 --rc geninfo_unexecuted_blocks=1 00:30:25.196 00:30:25.196 ' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.196 --rc genhtml_branch_coverage=1 00:30:25.196 --rc genhtml_function_coverage=1 00:30:25.196 --rc genhtml_legend=1 00:30:25.196 --rc geninfo_all_blocks=1 00:30:25.196 --rc geninfo_unexecuted_blocks=1 00:30:25.196 00:30:25.196 ' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:25.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.196 --rc genhtml_branch_coverage=1 00:30:25.196 --rc genhtml_function_coverage=1 00:30:25.196 --rc genhtml_legend=1 00:30:25.196 --rc geninfo_all_blocks=1 00:30:25.196 --rc geninfo_unexecuted_blocks=1 
00:30:25.196 00:30:25.196 ' 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:25.196 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:25.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:25.197 
19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # nvmftestinit 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:30:25.197 19:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:33.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
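Annotation: the trace around this point is nvmf/common.sh classifying NICs by PCI vendor:device ID (the e810/x722/mlx arrays built above; 0x8086:0x159b is an Intel E810-family part bound to the ice driver) and then resolving each matching PCI function to its kernel net device through sysfs, which produces the "Found ..." lines. A minimal standalone sketch of that sysfs walk, under the assumption that SPDK's helpers reduce to roughly this; variable names here are illustrative, not SPDK's actual code:

# Sketch: enumerate Intel E810 PCI functions and their netdev names via sysfs.
intel=0x8086
e810_ids=(0x1592 0x159b)            # mirrors the e810 ID table in the trace
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] || continue
        driver=unknown
        # the driver symlink tells us whether the port is bound (e.g. to ice)
        [[ -L $pci/driver ]] && driver=$(basename "$(readlink "$pci/driver")")
        echo "Found ${pci##*/} ($vendor - $device), driver: $driver"
        # each bound PCI function exposes its netdev name(s) under <pci>/net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
done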
00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:33.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.339 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:33.340 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:33.340 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:33.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:33.340 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:30:33.340 10.0.0.1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:33.340 10.0.0.2 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:33.340 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:33.340 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.341 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:33.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.706 ms 00:30:33.341 00:30:33.341 --- 10.0.0.1 ping statistics --- 00:30:33.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.341 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:33.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:33.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:30:33.341 00:30:33.341 --- 10.0.0.2 ping statistics --- 00:30:33.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.341 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:33.341 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:33.341 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:33.342 19:20:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=541030 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 541030 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 541030 ']' 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:33.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:33.342 19:20:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.342 [2024-11-05 19:20:01.873531] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:30:33.342 [2024-11-05 19:20:01.873604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.342 [2024-11-05 19:20:01.955198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.342 [2024-11-05 19:20:02.005995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.342 [2024-11-05 19:20:02.006045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.342 [2024-11-05 19:20:02.006053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.342 [2024-11-05 19:20:02.006061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.342 [2024-11-05 19:20:02.006067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.342 [2024-11-05 19:20:02.006834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 [2024-11-05 19:20:02.739488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.603 [2024-11-05 19:20:02.747808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:33.603 null0 00:30:33.603 [2024-11-05 19:20:02.779716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=541305 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 541305 /tmp/host.sock 00:30:33.603 19:20:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 541305 ']' 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:33.603 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:33.603 19:20:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 [2024-11-05 19:20:02.856927] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:30:33.603 [2024-11-05 19:20:02.856989] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541305 ] 00:30:33.864 [2024-11-05 19:20:02.933163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.864 [2024-11-05 19:20:02.975069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:34.434 19:20:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.434 19:20:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.819 [2024-11-05 19:20:04.798813] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:35.819 [2024-11-05 19:20:04.798835] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:35.819 [2024-11-05 19:20:04.798849] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:35.819 [2024-11-05 19:20:04.885138] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:35.819 [2024-11-05 19:20:05.107526] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:35.819 [2024-11-05 19:20:05.108462] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6be3f0:1 started. 00:30:35.819 [2024-11-05 19:20:05.110018] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:35.819 [2024-11-05 19:20:05.110059] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:35.819 [2024-11-05 19:20:05.110090] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:35.819 [2024-11-05 19:20:05.110104] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:35.819 [2024-11-05 19:20:05.110124] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:35.819 [2024-11-05 19:20:05.117859] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6be3f0 was disconnected and freed. delete nvme_qpair. 
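Annotation: the wait_for_bdev polling entered above (discovery_remove_ifc.sh@67 → @28 → @24) and the jq/sort/xargs plumbing that follows reduce to a small idiom: dump the host app's bdev names over its RPC socket and retry once per second until the joined list equals the expected value (nvme0n1 while the path is up; the empty string once the target interface is removed and the ctrlr-loss timeout fires). A hedged reconstruction from the trace alone; the real helper may bound the number of retries:

# Sketch reconstructed from the trace (discovery_remove_ifc.sh@24, @28-29).
get_bdev_list() {
    # rpc_cmd wraps scripts/rpc.py; -s targets the host app on /tmp/host.sock
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ $(get_bdev_list) != "$expected" ]]; do
        sleep 1    # matches the 'sleep 1' at discovery_remove_ifc.sh@29
    done
}

# As exercised in this test: wait_for_bdev nvme0n1 once discovery attaches the
# controller; then, after 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down',
# wait_for_bdev '' until the lost controller's bdev is deleted.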
00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:35.819 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:36.080 19:20:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:37.020 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:37.281 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.281 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:37.281 19:20:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:38.221 19:20:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:39.161 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.422 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:39.422 19:20:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:40.363 19:20:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:41.304 [2024-11-05 19:20:10.551453] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:41.304 [2024-11-05 19:20:10.551504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.304 [2024-11-05 19:20:10.551516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.304 [2024-11-05 19:20:10.551531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.304 [2024-11-05 19:20:10.551539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.304 [2024-11-05 19:20:10.551547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.304 [2024-11-05 19:20:10.551554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.304 [2024-11-05 19:20:10.551562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.304 [2024-11-05 19:20:10.551570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.304 [2024-11-05 19:20:10.551578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:41.304 [2024-11-05 19:20:10.551586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.304 [2024-11-05 19:20:10.551594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69ac00 is same with the state(6) to be set 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:41.304 [2024-11-05 19:20:10.561475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69ac00 (9): Bad file descriptor 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:41.304 [2024-11-05 19:20:10.571512] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
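The repeating @28/@29 pairs are a poll-and-sleep loop: the script re-reads the bdev list once per second until it matches an expected value. A plausible sketch, assuming the helper name from the tags and eliding whatever timeout guard the real script applies (none is visible in this trace):

    # Hedged reconstruction of the @28/@29 polling loop seen above.
    wait_for_bdev() {
        local expected_bdev=$1
        while [[ $(get_bdev_list) != "$expected_bdev" ]]; do
            sleep 1   # @29: retry once per second
        done
    }

At this stage it is waiting for the list to become empty: the spdk_sock_recv() errno 110 (ETIMEDOUT) above is the host noticing that 10.0.0.2:4420 stopped answering after the @70/@71 address delete and link down.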
00:30:41.304 [2024-11-05 19:20:10.571526] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:41.304 [2024-11-05 19:20:10.571531] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:41.304 [2024-11-05 19:20:10.571537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:41.304 [2024-11-05 19:20:10.571564] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:41.304 19:20:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:42.687 [2024-11-05 19:20:11.636814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:42.687 [2024-11-05 19:20:11.636854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x69ac00 with addr=10.0.0.2, port=4420 00:30:42.687 [2024-11-05 19:20:11.636867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69ac00 is same with the state(6) to be set 00:30:42.687 [2024-11-05 19:20:11.636892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x69ac00 (9): Bad file descriptor 00:30:42.687 [2024-11-05 19:20:11.637260] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:42.687 [2024-11-05 19:20:11.637283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:42.687 [2024-11-05 19:20:11.637291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:42.687 [2024-11-05 19:20:11.637300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:42.687 [2024-11-05 19:20:11.637308] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:42.687 [2024-11-05 19:20:11.637314] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:42.687 [2024-11-05 19:20:11.637319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:42.687 [2024-11-05 19:20:11.637327] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:42.687 [2024-11-05 19:20:11.637332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:30:42.687 19:20:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:43.629 [2024-11-05 19:20:12.639708] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:43.629 [2024-11-05 19:20:12.639728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:43.629 [2024-11-05 19:20:12.639740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:43.629 [2024-11-05 19:20:12.639751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:43.629 [2024-11-05 19:20:12.639759] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:43.629 [2024-11-05 19:20:12.639766] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:43.629 [2024-11-05 19:20:12.639771] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:43.629 [2024-11-05 19:20:12.639776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:43.629 [2024-11-05 19:20:12.639796] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:43.629 [2024-11-05 19:20:12.639817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.629 [2024-11-05 19:20:12.639827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.629 [2024-11-05 19:20:12.639837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.629 [2024-11-05 19:20:12.639849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.629 [2024-11-05 19:20:12.639857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.629 [2024-11-05 19:20:12.639864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.629 [2024-11-05 19:20:12.639872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.629 [2024-11-05 19:20:12.639879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.629 [2024-11-05 19:20:12.639888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.629 [2024-11-05 19:20:12.639895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.629 [2024-11-05 19:20:12.639902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
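Both the data controller (nqn.2016-06.io.spdk:cnode0) and the discovery controller (nqn.2014-08.org.nvmexpress.discovery) are now in the failed state and the stale discovery entry is gone. For a run that stalls here, the same RPC socket can be queried out of band; bdev_nvme_get_controllers is a standard SPDK RPC, shown purely as a diagnostic aside and not as part of this test:

    # Diagnostic one-liner (not in discovery_remove_ifc.sh): list the NVMe
    # controllers the host app knows about, with their current state.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .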
00:30:43.629 [2024-11-05 19:20:12.640155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68a340 (9): Bad file descriptor 00:30:43.629 [2024-11-05 19:20:12.641168] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:43.629 [2024-11-05 19:20:12.641179] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:43.629 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:43.630 19:20:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:44.571 19:20:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:44.571 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.831 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:44.831 19:20:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:45.401 [2024-11-05 19:20:14.693932] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:45.401 [2024-11-05 19:20:14.693950] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:45.401 [2024-11-05 19:20:14.693963] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:45.662 [2024-11-05 19:20:14.782254] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.662 [2024-11-05 19:20:14.962367] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:30:45.662 [2024-11-05 19:20:14.963257] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x6cd150:1 started. 
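Stitching together the @67 through @81 script tags that appear across this trace, the body of the test is approximately the sequence below. This is a reconstruction from the tags, not the verbatim script:

    # Approximate flow of discovery_remove_ifc.sh as implied by the line tags.
    wait_for_bdev nvme0n1                                            # @67: namespace appears
    ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1   # @70: drop the target IP
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down              # @71: take the link down
    wait_for_bdev ''                                                 # @74: bdev vanishes on timeout
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1   # @77: restore the target IP
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up                # @78: bring the link back up
    wait_for_bdev nvme1n1                                            # @81: discovery re-attaches

The fresh attach deliberately lands as nvme1/nvme1n1 rather than reusing nvme0, which is why the @28 comparison above switches from \n\v\m\e\0\n\1 to \n\v\m\e\1\n\1.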
00:30:45.662 [2024-11-05 19:20:14.964513] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:45.662 [2024-11-05 19:20:14.964549] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:45.662 [2024-11-05 19:20:14.964568] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:45.662 [2024-11-05 19:20:14.964581] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:45.662 [2024-11-05 19:20:14.964589] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:45.662 [2024-11-05 19:20:14.971870] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x6cd150 was disconnected and freed. delete nvme_qpair. 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:45.662 19:20:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:47.046 19:20:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 541305 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 541305 ']' 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 541305 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 541305 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:47.046 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 541305' 00:30:47.046 killing process with pid 541305 00:30:47.046 
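The @952-@970 fragments above (with the @971 kill and @976 wait that follow) are autotest_common.sh's killprocess shutting down the host process, pid 541305, and later the target, pid 541030. Reconstructed from those tags; the real function carries more error handling, and the sudo-unwrapping branch is only summarized here:

    # Hedged sketch of killprocess as implied by the @952-@976 tags.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                             # @952: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0                # @956: nothing to do if already gone
        if [[ $(uname) == Linux ]]; then                      # @957
            process_name=$(ps --no-headers -o comm= "$pid")   # @958: e.g. reactor_0
        fi
        # @962: if the visible process is a sudo wrapper, the real script kills
        # the underlying child instead; that branch is elided in this sketch.
        echo "killing process with pid $pid"                  # @970
        kill "$pid"                                           # @971
        wait "$pid"                                           # @976: reap and surface the exit code
    }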
19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 541305 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 541305 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:47.047 rmmod nvme_tcp 00:30:47.047 rmmod nvme_fabrics 00:30:47.047 rmmod nvme_keyring 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 541030 ']' 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 541030 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 541030 ']' 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 541030 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 541030 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 541030' 00:30:47.047 killing process with pid 541030 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 541030 00:30:47.047 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 541030 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:47.307 19:20:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:47.307 19:20:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore 00:30:49.320 00:30:49.320 real 0m24.522s 00:30:49.320 user 0m29.611s 00:30:49.320 sys 0m7.190s 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.320 
************************************ 00:30:49.320 END TEST nvmf_discovery_remove_ifc 00:30:49.320 ************************************ 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@35 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.320 ************************************ 00:30:49.320 START TEST nvmf_multicontroller 00:30:49.320 ************************************ 00:30:49.320 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:49.581 * Looking for test storage... 00:30:49.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.581 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:49.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.582 --rc genhtml_branch_coverage=1 00:30:49.582 --rc genhtml_function_coverage=1 00:30:49.582 --rc genhtml_legend=1 00:30:49.582 --rc geninfo_all_blocks=1 00:30:49.582 --rc geninfo_unexecuted_blocks=1 00:30:49.582 00:30:49.582 ' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:49.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.582 --rc genhtml_branch_coverage=1 00:30:49.582 --rc genhtml_function_coverage=1 00:30:49.582 --rc genhtml_legend=1 00:30:49.582 --rc geninfo_all_blocks=1 00:30:49.582 --rc geninfo_unexecuted_blocks=1 00:30:49.582 00:30:49.582 ' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:49.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.582 --rc genhtml_branch_coverage=1 00:30:49.582 --rc genhtml_function_coverage=1 00:30:49.582 --rc genhtml_legend=1 00:30:49.582 --rc geninfo_all_blocks=1 00:30:49.582 --rc geninfo_unexecuted_blocks=1 00:30:49.582 00:30:49.582 ' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:49.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.582 --rc genhtml_branch_coverage=1 00:30:49.582 --rc genhtml_function_coverage=1 00:30:49.582 --rc genhtml_legend=1 00:30:49.582 --rc geninfo_all_blocks=1 00:30:49.582 --rc geninfo_unexecuted_blocks=1 00:30:49.582 00:30:49.582 ' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:49.582 19:20:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:49.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- 
# '[' -n '' ']' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:49.582 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:30:49.583 19:20:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:30:57.724 19:20:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:57.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:57.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.724 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:57.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:57.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:57.725 19:20:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # create_target_ns 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:57.725 
19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:57.725 19:20:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:57.725 10.0.0.1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 
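set_ip receives each address as a 32-bit integer from that pool, and val_to_ip renders it as a dotted quad, one octet per byte: 167772161 is 0x0A000001, so the printf above emits 10.0.0.1, and the incremented value 167772162 becomes 10.0.0.2. A plausible reconstruction of the helper (the real setup.sh may derive the octets slightly differently):

  # 32-bit integer -> dotted-quad IPv4 address.
  val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
      $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
  }
  val_to_ip 167772161   # -> 10.0.0.1
  val_to_ip 167772162   # -> 10.0.0.2

The address is applied twice on purpose: once with ip addr add, and once into /sys/class/net/<dev>/ifalias, so later helpers can read it back with cat instead of parsing ip output.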
00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:57.725 10.0.0.2 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:57.725 19:20:26 
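With both links up, the harness opens the firewall for the NVMe/TCP listener: an ACCEPT rule is inserted at the head of INPUT for TCP port 4420 on the initiator-side interface, tagged with an SPDK_NVMF comment so teardown can find and remove it later. The rule as issued:

  # Allow NVMe/TCP in on the initiator interface; the comment tag is the
  # handle that lets cleanup strip the rule without knowing its position.
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'

dev_map then records initiator0=cvl_0_0 and target0=cvl_0_1, which is the indirection every later get_net_dev lookup resolves through.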
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:57.725 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:57.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.546 ms 00:30:57.726 00:30:57.726 --- 10.0.0.1 ping statistics --- 00:30:57.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.726 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:57.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:57.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:30:57.726 00:30:57.726 --- 10.0.0.2 ping statistics --- 00:30:57.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.726 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:57.726 19:20:26 
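ping_ips verifies the pair in both directions before any NVMe traffic flows: one probe from inside the namespace to the initiator address and one from the host to the target address, each expecting a single reply with 0% loss. The two checks reduce to:

  # Target namespace -> initiator, then host -> target namespace.
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
  ping -c 1 10.0.0.2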
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:57.726 19:20:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target1 00:30:57.726 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=548171 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 548171 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 548171 ']' 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
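nvmf_legacy_env has now pinned the addresses the rest of the test uses (NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, no second pair on this rig), nvme-tcp is loaded, and nvmfappstart launches the target inside the namespace with core mask 0xE (cores 1-3, matching the three reactor notices below). Stripped of the harness wrappers, and assuming it is run from the SPDK repo root, the launch amounts to:

  # Start the target in the namespace; -e 0xFFFF enables all tracepoint
  # groups, -m 0xE pins reactors to cores 1-3.
  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # waitforlisten then blocks until the RPC socket answers; a rough equivalent:
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
  done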
00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:57.727 19:20:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:57.727 [2024-11-05 19:20:26.489679] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:30:57.727 [2024-11-05 19:20:26.489782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.727 [2024-11-05 19:20:26.580589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:57.727 [2024-11-05 19:20:26.633219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.727 [2024-11-05 19:20:26.633274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.727 [2024-11-05 19:20:26.633283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.727 [2024-11-05 19:20:26.633290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.727 [2024-11-05 19:20:26.633296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.727 [2024-11-05 19:20:26.635098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.727 [2024-11-05 19:20:26.635416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.727 [2024-11-05 19:20:26.635416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.987 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:57.987 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:57.987 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:57.987 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:57.987 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 [2024-11-05 19:20:27.346429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 Malloc0 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 [2024-11-05 19:20:27.412825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 [2024-11-05 19:20:27.424738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 Malloc1 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=548418 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 548418 /var/tmp/bdevperf.sock 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 548418 ']' 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:58.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
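Everything the test needs has now been provisioned over the target's RPC socket: a TCP transport with an 8 KiB IO unit, then, for each of the two subsystems, a 64 MiB malloc bdev, the subsystem itself (any host allowed), its namespace, and listeners on both ports 4420 and 4421 of 10.0.0.2. The same sequence via rpc.py (paths assumed relative to the SPDK repo; the socket defaults to /var/tmp/spdk.sock):

  rpc() { ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
    rpc bdev_malloc_create 64 512 -b Malloc$((i - 1))
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
  done

bdevperf is then started with its own RPC socket (-r /var/tmp/bdevperf.sock) and -z, so it waits for controllers to be attached before running the 128-deep 4 KiB write workload declared with -q 128 -o 4096 -w write -t 1.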
00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.248 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.509 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.509 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:58.509 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:58.509 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.509 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.770 NVMe0n1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.770 1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.770 request: 00:30:58.770 { 00:30:58.770 "name": "NVMe0", 00:30:58.770 "trtype": "tcp", 00:30:58.770 "traddr": "10.0.0.2", 00:30:58.770 "adrfam": "ipv4", 00:30:58.770 "trsvcid": "4420", 00:30:58.770 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:58.770 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:58.770 "hostaddr": "10.0.0.1", 00:30:58.770 "prchk_reftag": false, 00:30:58.770 "prchk_guard": false, 00:30:58.770 "hdgst": false, 00:30:58.770 "ddgst": false, 00:30:58.770 "allow_unrecognized_csi": false, 00:30:58.770 "method": "bdev_nvme_attach_controller", 00:30:58.770 "req_id": 1 00:30:58.770 } 00:30:58.770 Got JSON-RPC error response 00:30:58.770 response: 00:30:58.770 { 00:30:58.770 "code": -114, 00:30:58.770 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:58.770 } 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.770 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.770 request: 00:30:58.770 { 00:30:58.770 "name": "NVMe0", 00:30:58.770 "trtype": "tcp", 00:30:58.770 "traddr": "10.0.0.2", 00:30:58.771 "adrfam": "ipv4", 00:30:58.771 "trsvcid": "4420", 00:30:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:58.771 "hostaddr": "10.0.0.1", 00:30:58.771 "prchk_reftag": false, 00:30:58.771 "prchk_guard": false, 00:30:58.771 "hdgst": false, 00:30:58.771 "ddgst": false, 00:30:58.771 "allow_unrecognized_csi": false, 00:30:58.771 "method": "bdev_nvme_attach_controller", 00:30:58.771 "req_id": 1 00:30:58.771 } 00:30:58.771 Got JSON-RPC error response 00:30:58.771 response: 00:30:58.771 { 00:30:58.771 "code": -114, 00:30:58.771 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:58.771 } 00:30:58.771 19:20:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.771 request: 00:30:58.771 { 00:30:58.771 "name": "NVMe0", 00:30:58.771 "trtype": "tcp", 00:30:58.771 "traddr": "10.0.0.2", 00:30:58.771 "adrfam": "ipv4", 00:30:58.771 "trsvcid": "4420", 00:30:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.771 "hostaddr": "10.0.0.1", 00:30:58.771 "prchk_reftag": false, 00:30:58.771 "prchk_guard": false, 00:30:58.771 "hdgst": false, 00:30:58.771 "ddgst": false, 00:30:58.771 "multipath": "disable", 00:30:58.771 "allow_unrecognized_csi": false, 00:30:58.771 "method": "bdev_nvme_attach_controller", 00:30:58.771 "req_id": 1 00:30:58.771 } 00:30:58.771 Got JSON-RPC error response 00:30:58.771 response: 00:30:58.771 { 00:30:58.771 "code": -114, 00:30:58.771 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:58.771 } 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:58.771 19:20:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.771 request: 00:30:58.771 { 00:30:58.771 "name": "NVMe0", 00:30:58.771 "trtype": "tcp", 00:30:58.771 "traddr": "10.0.0.2", 00:30:58.771 "adrfam": "ipv4", 00:30:58.771 "trsvcid": "4420", 00:30:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.771 "hostaddr": "10.0.0.1", 00:30:58.771 "prchk_reftag": false, 00:30:58.771 "prchk_guard": false, 00:30:58.771 "hdgst": false, 00:30:58.771 "ddgst": false, 00:30:58.771 "multipath": "failover", 00:30:58.771 "allow_unrecognized_csi": false, 00:30:58.771 "method": "bdev_nvme_attach_controller", 00:30:58.771 "req_id": 1 00:30:58.771 } 00:30:58.771 Got JSON-RPC error response 00:30:58.771 response: 00:30:58.771 { 00:30:58.771 "code": -114, 00:30:58.771 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:58.771 } 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.771 19:20:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.771 NVMe0n1 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
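The four NOT-wrapped attaches above are the negative half of the multicontroller test, and each -114 response is expected: once NVMe0 exists, re-attaching that name to the same listener with a new hostnqn, pointing it at a different subsystem (cnode2), or adding a path while -x disable forbids multipath are all rejected, as is -x failover against the identical network path. What finally succeeds is the plain attach at the end, which adds the second listener (port 4421) of the same subsystem as a new path under the existing controller name. Condensed, with rpc being the same rpc.py wrapper sketched earlier:

  # Rejected (-114): same controller name, same network path.
  rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # Accepted: same name, genuinely new path (different port) on cnode1.
  rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1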
00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.771 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.031 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:30:59.032 19:20:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:00.415 { 00:31:00.415 "results": [ 00:31:00.415 { 00:31:00.415 "job": "NVMe0n1", 00:31:00.415 "core_mask": "0x1", 00:31:00.415 "workload": "write", 00:31:00.415 "status": "finished", 00:31:00.415 "queue_depth": 128, 00:31:00.415 "io_size": 4096, 00:31:00.415 "runtime": 1.00838, 00:31:00.415 "iops": 24829.925226601084, 00:31:00.415 "mibps": 96.99189541641049, 00:31:00.415 "io_failed": 0, 00:31:00.415 "io_timeout": 0, 00:31:00.415 "avg_latency_us": 5147.124262321272, 00:31:00.415 "min_latency_us": 2088.96, 00:31:00.415 "max_latency_us": 12397.226666666667 00:31:00.415 } 00:31:00.415 ], 00:31:00.415 "core_count": 1 00:31:00.415 } 00:31:00.415 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:00.415 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.415 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.415 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.415 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n '' ]] 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 
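The reported numbers are internally consistent: 24,829.93 write IOPS of 4 KiB each is 24829.93 x 4096 / 2^20 = 96.99 MiB/s, matching the mibps field, and by Little's law a queue depth of 128 at that rate implies roughly 128 / 24830 s per IO, in line with the 5147 us average latency. A quick check:

  # Sanity-check the bdevperf results with awk arithmetic.
  awk 'BEGIN {
    iops = 24829.925226601084
    printf "throughput  = %.2f MiB/s\n", iops * 4096 / 1048576   # ~96.99
    printf "avg latency ~ %.0f us\n",    128 / iops * 1e6        # ~5155 vs 5147 reported
  }'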
-- # '[' -z 548418 ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 548418' 00:31:00.416 killing process with pid 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 548418 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:31:00.416 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:00.416 [2024-11-05 19:20:27.543076] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:31:00.416 [2024-11-05 19:20:27.543134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548418 ]
00:31:00.416 [2024-11-05 19:20:27.614112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:00.416 [2024-11-05 19:20:27.650146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:00.416 [2024-11-05 19:20:28.235120] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name a8ebb295-32f8-4943-bbf6-7eb1aff5cae1 already exists
00:31:00.416 [2024-11-05 19:20:28.235151] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:a8ebb295-32f8-4943-bbf6-7eb1aff5cae1 alias for bdev NVMe1n1
00:31:00.416 [2024-11-05 19:20:28.235160] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:31:00.416 Running I/O for 1 seconds...
00:31:00.416 24783.00 IOPS, 96.81 MiB/s
00:31:00.416 Latency(us)
00:31:00.416 [2024-11-05T18:20:29.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:00.416 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:31:00.416 NVMe0n1 : 1.01 24829.93 96.99 0.00 0.00 5147.12 2088.96 12397.23
00:31:00.416 [2024-11-05T18:20:29.739Z] ===================================================================================================================
00:31:00.416 [2024-11-05T18:20:29.739Z] Total : 24829.93 96.99 0.00 0.00 5147.12 2088.96 12397.23
00:31:00.416 Received shutdown signal, test time was about 1.000000 seconds
00:31:00.416
00:31:00.416 Latency(us)
00:31:00.416 [2024-11-05T18:20:29.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:00.416 [2024-11-05T18:20:29.739Z] ===================================================================================================================
00:31:00.416 [2024-11-05T18:20:29.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:00.416 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20}
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:31:00.416 rmmod nvme_tcp
00:31:00.416 rmmod nvme_fabrics
00:31:00.416 rmmod nvme_keyring
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e
00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0
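nvmfcleanup retries the module unload because nvme-tcp can stay busy for a moment while the just-closed connections drain; the rmmod lines above are modprobe -v narrating its own work. In outline (the break/back-off structure is assumed; the trace only shows the loop and the modprobe calls):

  sync
  set +e
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # prints the rmmod steps on success
    sleep 1                            # assumed retry delay
  done
  modprobe -v -r nvme-fabrics
  set -e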
19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 548171 ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 548171 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 548171 ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 548171 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:00.416 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 548171 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 548171' 00:31:00.677 killing process with pid 548171 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 548171 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 548171 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@264 -- # local dev 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:00.677 19:20:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # return 0 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
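
The flush_ip steps in the nvmf_fini trace run the same iproute2 command either on the host or inside the target namespace by evaluating an optional prefix in front of it, which is why the host-side form appears as eval ' ip addr flush dev cvl_0_0' with a leading space. A rough sketch of the idea, using a plain string prefix instead of the script's nameref to NVMF_TARGET_NS_CMD:

    flush_ip() {
        local dev=$1 ns_prefix=$2   # e.g. 'ip netns exec nvmf_ns_spdk', or empty
        eval "$ns_prefix ip addr flush dev $dev"
    }
    flush_ip cvl_0_0                                # host side
    flush_ip cvl_0_1 'ip netns exec nvmf_ns_spdk'   # inside the target namespace
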
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@284 -- # iptr 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-save 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-restore 00:31:03.220 00:31:03.220 real 0m13.340s 00:31:03.220 user 0m14.344s 00:31:03.220 sys 0m6.370s 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:03.220 19:20:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:03.220 ************************************ 00:31:03.220 END TEST nvmf_multicontroller 00:31:03.220 ************************************ 00:31:03.220 19:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # [[ tcp == \r\d\m\a ]] 00:31:03.220 19:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # [[ 0 -eq 1 ]] 00:31:03.220 19:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # [[ 0 -eq 1 ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:31:03.221 00:31:03.221 real 6m54.656s 00:31:03.221 user 11m56.779s 00:31:03.221 sys 2m18.106s 00:31:03.221 19:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:03.221 19:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:03.221 ************************************ 00:31:03.221 END TEST nvmf_host 00:31:03.221 ************************************ 00:31:03.221 19:20:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 00:31:03.221 19:20:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ 0 -eq 0 ]] 00:31:03.221 19:20:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:03.221 19:20:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:03.221 19:20:32 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:03.221 19:20:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.221 ************************************ 00:31:03.221 START TEST nvmf_target_core_interrupt_mode 00:31:03.221 ************************************ 00:31:03.221 19:20:32 
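
Each suite in this log runs under the run_test wrapper, which prints the asterisk START/END banners and accounts for the real/user/sys lines between them. Loosely, and only as a sketch inferred from the banners and timing output here (not the wrapper's actual source), it behaves like:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"   # the log frames this in asterisk rows
        time "$@"                 # bash's time keyword emits the real/user/sys summary
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
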
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:03.221 * Looking for test storage... 00:31:03.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.221 --rc genhtml_branch_coverage=1 00:31:03.221 --rc genhtml_function_coverage=1 00:31:03.221 --rc genhtml_legend=1 00:31:03.221 --rc geninfo_all_blocks=1 00:31:03.221 --rc geninfo_unexecuted_blocks=1 00:31:03.221 00:31:03.221 ' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.221 --rc genhtml_branch_coverage=1 00:31:03.221 --rc genhtml_function_coverage=1 00:31:03.221 --rc genhtml_legend=1 00:31:03.221 --rc geninfo_all_blocks=1 00:31:03.221 --rc geninfo_unexecuted_blocks=1 00:31:03.221 00:31:03.221 ' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.221 --rc genhtml_branch_coverage=1 00:31:03.221 --rc genhtml_function_coverage=1 00:31:03.221 --rc genhtml_legend=1 00:31:03.221 --rc geninfo_all_blocks=1 00:31:03.221 --rc geninfo_unexecuted_blocks=1 00:31:03.221 00:31:03.221 ' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:03.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.221 --rc genhtml_branch_coverage=1 00:31:03.221 --rc genhtml_function_coverage=1 00:31:03.221 --rc genhtml_legend=1 00:31:03.221 --rc geninfo_all_blocks=1 00:31:03.221 --rc geninfo_unexecuted_blocks=1 00:31:03.221 00:31:03.221 ' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 
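
The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both versions on the characters . - and :, then compares the fields numerically, treating missing fields as zero. A condensed sketch of that comparison (the helper name and the ${...:-0} zero-padding are illustrative; the real cmp_versions also validates that each field is numeric before comparing):

    version_lt() {   # returns 0 when $1 < $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'   # 1 < 2, so this prints
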
00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:03.221 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 
1 ']' 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:03.222 ************************************ 00:31:03.222 START TEST nvmf_abort 00:31:03.222 ************************************ 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:03.222 * Looking for test storage... 00:31:03.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:31:03.222 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:03.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.484 --rc genhtml_branch_coverage=1 00:31:03.484 --rc genhtml_function_coverage=1 00:31:03.484 --rc genhtml_legend=1 00:31:03.484 --rc geninfo_all_blocks=1 00:31:03.484 --rc geninfo_unexecuted_blocks=1 00:31:03.484 00:31:03.484 ' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:03.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.484 --rc genhtml_branch_coverage=1 00:31:03.484 --rc genhtml_function_coverage=1 00:31:03.484 --rc genhtml_legend=1 00:31:03.484 --rc geninfo_all_blocks=1 00:31:03.484 --rc geninfo_unexecuted_blocks=1 00:31:03.484 00:31:03.484 ' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:03.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.484 --rc genhtml_branch_coverage=1 00:31:03.484 --rc genhtml_function_coverage=1 00:31:03.484 --rc genhtml_legend=1 00:31:03.484 --rc geninfo_all_blocks=1 00:31:03.484 --rc geninfo_unexecuted_blocks=1 00:31:03.484 00:31:03.484 ' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:03.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.484 --rc genhtml_branch_coverage=1 00:31:03.484 --rc genhtml_function_coverage=1 00:31:03.484 --rc genhtml_legend=1 00:31:03.484 --rc geninfo_all_blocks=1 00:31:03.484 --rc geninfo_unexecuted_blocks=1 00:31:03.484 00:31:03.484 ' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.484 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:03.485 19:20:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:31:03.485 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:31:11.627 19:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:11.627 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:11.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice 
== unbound ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:11.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:11.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort 
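
The discovery pass above maps each supported PCI function to its kernel netdev through sysfs rather than lspci: the net/ subdirectory of a bound device names its interfaces. A minimal standalone sketch of the lookup (the script additionally verifies the device's up state, which the [[ up == up ]] line reflects; that check is omitted here):

    pci=0000:4b:00.0                                     # one of the e810 ports above
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    if [[ -e ${pci_net_devs[0]} ]]; then                 # unmatched globs stay literal
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    fi
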
-- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:11.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:11.628 10.0.0.1 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:31:11.628 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:11.629 10.0.0.2 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:11.629 19:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:11.629 
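
The ipts line above installs the accept rule for port 4420 with a recognizable comment so teardown needs no per-rule bookkeeping: the iptr step at the end of a suite (seen earlier in this log) simply filters every tagged rule out of a full save/restore round trip. Both commands as they appear in the trace:

    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # scrub all tagged rules
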
19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:11.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:11.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.587 ms 00:31:11.629 00:31:11.629 --- 10.0.0.1 ping statistics --- 00:31:11.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.629 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:11.629 19:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:11.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:31:11.629 00:31:11.629 --- 10.0.0.2 ping statistics --- 00:31:11.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.629 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # 
dev=cvl_0_0 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:11.629 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:11.630 19:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:11.630 19:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=553259 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 553259 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 553259 ']' 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:11.630 19:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.630 [2024-11-05 19:20:40.300408] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:11.630 [2024-11-05 19:20:40.301566] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:31:11.630 [2024-11-05 19:20:40.301622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.630 [2024-11-05 19:20:40.402171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:11.630 [2024-11-05 19:20:40.454192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.630 [2024-11-05 19:20:40.454245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.630 [2024-11-05 19:20:40.454254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.630 [2024-11-05 19:20:40.454261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.630 [2024-11-05 19:20:40.454268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
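Everything traced up to this point is nvmf/setup.sh plumbing the two physical ports into a point-to-point NVMe/TCP topology: the initiator side (cvl_0_0) stays in the root namespace with 10.0.0.1/24, the target side (cvl_0_1) gets 10.0.0.2/24 inside the nvmf_ns_spdk namespace, each address is mirrored into the interface's ifalias so get_ip_address can read it back later, a tagged iptables rule opens TCP/4420, and a ping in each direction proves the path. Condensed into the underlying commands, the sequence is roughly (a sketch using the device names from this run; the script itself resolves them through dev_map, and paths are abbreviated):

    ip addr add 10.0.0.1/24 dev cvl_0_0                        # initiator, root netns
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias         # cache the IP in ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1   # target side
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    # open the NVMe/TCP port; the comment tag is what teardown greps for later
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1              # target ns -> initiator
    ping -c 1 10.0.0.2                                         # initiator -> target ns
    # finally the target itself, inside the namespace: shm id 0, all tracepoint
    # groups (0xFFFF), interrupt mode, reactors on cores 1-3 (-m 0xE)
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE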
00:31:11.630 [2024-11-05 19:20:40.456044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.630 [2024-11-05 19:20:40.456227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:11.630 [2024-11-05 19:20:40.456229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.630 [2024-11-05 19:20:40.532160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:11.630 [2024-11-05 19:20:40.532209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:11.630 [2024-11-05 19:20:40.532933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:11.630 [2024-11-05 19:20:40.533192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.891 [2024-11-05 19:20:41.165193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.891 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:11.891 Malloc0 00:31:11.892 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.892 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:11.892 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.892 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 Delay0 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.153 
19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 [2024-11-05 19:20:41.257060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.153 19:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:12.153 [2024-11-05 19:20:41.338955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:14.065 Initializing NVMe Controllers 00:31:14.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:14.065 controller IO queue size 128 less than required 00:31:14.065 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:14.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:14.065 Initialization complete. Launching workers. 
00:31:14.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29009 00:31:14.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29070, failed to submit 66 00:31:14.065 success 29009, unsuccessful 61, failed 0 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:14.065 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:14.326 rmmod nvme_tcp 00:31:14.326 rmmod nvme_fabrics 00:31:14.326 rmmod nvme_keyring 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 553259 ']' 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 553259 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 553259 ']' 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 553259 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 553259 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 553259' 00:31:14.326 killing process with pid 553259 00:31:14.326 
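With the stats above in (29,009 of 29,070 submitted aborts succeeded, 61 unsuccessful, 66 failed to submit), the abort case is done and the harness starts tearing the target down. For reference, the test body it just executed reduces to a short RPC sequence plus one example binary; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent by hand would be roughly (a sketch, paths abbreviated from the trace):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256    # abort.sh@17
    $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB bdev, 4 KiB blocks
    # wrap it in a delay bdev (1,000,000 us injected on reads and writes) so
    # submitted I/O lingers long enough for the abort example to have targets
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # drive it from the initiator: 1 core, 1 second, queue depth 128
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0  # abort.sh@34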
19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 553259 00:31:14.326 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 553259 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:14.586 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:31:16.499 00:31:16.499 real 0m13.425s 00:31:16.499 user 0m10.615s 00:31:16.499 sys 0m7.015s 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:16.499 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:16.499 ************************************ 00:31:16.499 END TEST nvmf_abort 00:31:16.499 ************************************ 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.760 ************************************ 00:31:16.760 START TEST nvmf_ns_hotplug_stress 00:31:16.760 ************************************ 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:16.760 * Looking for test storage... 
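The nvmf_abort teardown a few lines up is worth a second look before the hotplug-stress preamble gets going: remove_target_ns deletes the namespace (its body is hidden by xtrace_disable_per_cmd, but the effect is that cvl_0_1 falls back into the root namespace, which is why the subsequent flush_ip calls run without ip netns exec), both addresses are flushed, and every firewall rule the harness ever added is removed in one pass by filtering on the SPDK_NVMF comment tag. A minimal sketch, assuming the hidden _remove_target_ns amounts to a plain ip netns delete:

    ip netns delete nvmf_ns_spdk          # _remove_target_ns; suppressed in the trace
    ip addr flush dev cvl_0_0             # both ports are back in the root netns now
    ip addr flush dev cvl_0_1
    # drop every rule tagged at setup time, without tracking rules individually
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rules with a comment and filtering them out of iptables-save output is a handy idiom whenever a test must restore firewall state it never snapshotted.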
00:31:16.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:31:16.760 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:16.760 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:16.760 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.760 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.760 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.761 --rc genhtml_branch_coverage=1 00:31:16.761 --rc genhtml_function_coverage=1 00:31:16.761 --rc genhtml_legend=1 00:31:16.761 --rc geninfo_all_blocks=1 00:31:16.761 --rc geninfo_unexecuted_blocks=1 00:31:16.761 00:31:16.761 ' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.761 --rc genhtml_branch_coverage=1 00:31:16.761 --rc genhtml_function_coverage=1 00:31:16.761 --rc genhtml_legend=1 00:31:16.761 --rc geninfo_all_blocks=1 00:31:16.761 --rc geninfo_unexecuted_blocks=1 00:31:16.761 00:31:16.761 ' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.761 --rc genhtml_branch_coverage=1 00:31:16.761 --rc genhtml_function_coverage=1 00:31:16.761 --rc genhtml_legend=1 00:31:16.761 --rc geninfo_all_blocks=1 00:31:16.761 --rc geninfo_unexecuted_blocks=1 00:31:16.761 00:31:16.761 ' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.761 --rc genhtml_branch_coverage=1 00:31:16.761 --rc genhtml_function_coverage=1 
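The lt 1.15 2 / cmp_versions dance traced above is common.sh probing the installed lcov so it can pick coverage flags: split each version string on '.', '-' or ':' and compare the components numerically, left to right. A simplified, self-contained reconstruction of that helper pair (the real scripts/common.sh also normalizes non-numeric components via decimal and supports other operators; this sketch assumes purely numeric parts):

    lt() {  # usage: lt VER1 VER2  ->  success if VER1 < VER2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1    # equal, so not strictly less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use the 1.x rc flags"

Here 1.15 vs 2 resolves on the very first component (1 < 2), which is why the trace returns 0 at scripts/common.sh@368 and the 1.x-style --rc lcov_* options are exported below.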
00:31:16.761 --rc genhtml_legend=1 00:31:16.761 --rc geninfo_all_blocks=1 00:31:16.761 --rc geninfo_unexecuted_blocks=1 00:31:16.761 00:31:16.761 ' 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.761 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.022 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:31:17.023 
19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:31:17.023 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:25.166 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.166 19:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:31:25.166 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:25.166 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:25.166 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.167 19:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:25.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:25.167 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:25.167 19:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:25.167 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:25.167 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:25.167 19:20:53 
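Each matched PCI function is then resolved to its kernel net device by globbing sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 above. The same lookup stands alone as:

    #!/usr/bin/env bash
    # Sketch: map a PCI function to its registered net device(s) via
    # sysfs, as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) does.
    pci=0000:4b:00.0                          # assumes this function exists
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # drop the path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"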
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:25.167 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
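create_target_ns is where the target side gets isolated: a fresh network namespace is created, NVMF_TARGET_NS_CMD is set so every target-side command runs through ip netns exec, and loopback is brought up inside the namespace before any data interface moves in. The equivalent, reduced to its essentials:

    #!/usr/bin/env bash
    set -e
    # Sketch of create_target_ns: give the target its own netns.
    ns=nvmf_ns_spdk
    ip netns add "$ns"
    NVMF_TARGET_NS_CMD=(ip netns exec "$ns")
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # lo starts down in a new ns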
-- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:25.168 10.0.0.1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 
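setup.sh carries addresses around as plain integers drawn from an ip_pool (0x0a000001 is 10.0.0.1) and only converts to dotted-quad form at the point of use, so the second address of a pair is literally ip+1. val_to_ip is plain byte extraction; a self-contained version:

    #!/usr/bin/env bash
    # Sketch of val_to_ip: 32-bit integer to dotted-quad notation.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2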
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:25.168 10.0.0.2 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:25.168 19:20:53 
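Note the tee into /sys/class/net/<dev>/ifalias after each ip addr add: the assigned address is recorded in the interface alias so later helpers can recover it with a plain cat (optionally through ip netns exec) instead of parsing ip addr output. The write/read pair, assuming the interface and namespace names from this run:

    #!/usr/bin/env bash
    # Sketch: record an interface's IP in its ifalias, read it back later.
    dev=cvl_0_0 ip=10.0.0.1
    ip addr add "$ip/24" dev "$dev"
    echo "$ip" | tee "/sys/class/net/$dev/ifalias"    # persist the address

    cat "/sys/class/net/$dev/ifalias"                 # -> 10.0.0.1
    # Same trick for the device that now lives in the target namespace:
    ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias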
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:25.168 19:20:53 
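Before recording the pair in dev_map, the harness opens TCP port 4420 on the initiator-side interface. The ipts wrapper tags every rule it installs with an 'SPDK_NVMF:<original args>' comment, presumably so teardown can later find and delete exactly the rules this test added. The wrapper as traced, plus a hypothetical cleanup probe:

    #!/usr/bin/env bash
    # ipts as seen at common.sh@541: install an iptables rule and tag it.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

    # Hypothetical teardown aid: list the tagged rules for removal.
    iptables-save | grep SPDK_NVMF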
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:25.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:25.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.675 ms 00:31:25.168 00:31:25.168 --- 10.0.0.1 ping statistics --- 00:31:25.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.168 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.168 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:25.169 19:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:25.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:31:25.169 00:31:25.169 --- 10.0.0.2 ping statistics --- 00:31:25.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.169 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:25.169 19:20:53 
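ping_ips is the sanity gate for the pair: the initiator address is pinged from inside the target namespace and the target address from the host, one packet each, so a missed netns move or a dead link fails here instead of mid-test. The same check in isolation:

    #!/usr/bin/env bash
    set -e
    # Sketch: one-packet reachability check in both directions.
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
    ping -c 1 10.0.0.2                              # host -> target ns
    echo 'pair 0 reachable in both directions'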
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:25.169 19:20:53 
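The lookups above also show why the SECOND_* variables come out empty: every logical name (initiator0, target0, initiator1, ...) goes through get_net_dev, which is just a dev_map lookup whose failure callers convert into an empty string, and with one configured pair dev_map only holds initiator0 and target0. A sketch of that indirection:

    #!/usr/bin/env bash
    # Sketch of the dev_map indirection behind get_net_dev: logical
    # names resolve to real interfaces; a missing key is a soft failure.
    declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
    get_net_dev() {
        local dev=$1
        [[ -n ${dev_map[$dev]} ]] || return 1
        echo "${dev_map[$dev]}"
    }
    get_net_dev target0                                # -> cvl_0_1
    get_net_dev target1 || echo 'no second pair configured'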
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 
00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.169 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=557984 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 557984 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 557984 ']' 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:25.170 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:25.170 [2024-11-05 19:20:53.695554] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.170 [2024-11-05 19:20:53.696693] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:31:25.170 [2024-11-05 19:20:53.696755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.170 [2024-11-05 19:20:53.797657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:25.170 [2024-11-05 19:20:53.850032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.170 [2024-11-05 19:20:53.850088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.170 [2024-11-05 19:20:53.850097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.170 [2024-11-05 19:20:53.850104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.170 [2024-11-05 19:20:53.850110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.170 [2024-11-05 19:20:53.852127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.170 [2024-11-05 19:20:53.852294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.170 [2024-11-05 19:20:53.852296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.170 [2024-11-05 19:20:53.929640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:25.170 [2024-11-05 19:20:53.929699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.170 [2024-11-05 19:20:53.930394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:25.170 [2024-11-05 19:20:53.930634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
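By this point nvmf_legacy_env has flattened dev_map into the variables older tests read (NVMF_TARGET_INTERFACE=cvl_0_1, NVMF_FIRST_TARGET_IP=10.0.0.2, the SECOND_* ones empty), and nvmfappstart runs the target inside the namespace: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, i.e. reactors on cores 1-3 with SPDK threads in interrupt rather than poll mode (the thread.c notices above show each poll group switching over). waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. The start/wait pattern, simplified (SPDK_BIN_DIR and rpc_py stand in for this workspace's paths):

    #!/usr/bin/env bash
    # Sketch of nvmfappstart: start nvmf_tgt in the target netns, then
    # poll its RPC socket. SPDK_BIN_DIR and rpc_py are placeholders.
    "${NVMF_TARGET_NS_CMD[@]}" "$SPDK_BIN_DIR/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # waitforlisten, reduced: retry until the socket answers an RPC.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died
        sleep 0.5
    done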
00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.431 [2024-11-05 19:20:54.701188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.431 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:25.692 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.952 [2024-11-05 19:20:55.081932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.952 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:25.952 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:26.213 Malloc0 00:31:26.213 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:26.473 Delay0 00:31:26.473 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.734 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:26.734 NULL1 00:31:26.734 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
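With the target listening, the test provisions it over RPC: a TCP transport (the -o -u 8192 options come from NVMF_TRANSPORT_OPTS), subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, then two backing bdevs: a 32 MB Malloc0 wrapped in a Delay0 with 1,000,000 us (1 s) latencies, and a 1000 MB, 512-byte-block NULL1 whose size the stress loop will keep nudging. The same sequence, condensed (rpc is a placeholder for this workspace's rpc.py):

    #!/usr/bin/env bash
    # Condensed sketch of the RPC provisioning traced above.
    rpc='/path/to/spdk/scripts/rpc.py'   # placeholder path
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0           # 32 MB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # 1 s delay bdev
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    $rpc bdev_null_create NULL1 1000 512                # 1000 MB null bdev
    $rpc nvmf_subsystem_add_ns "$nqn" NULL1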
00:31:26.995 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=558462 00:31:26.995 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:26.995 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:26.995 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.255 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.255 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:27.255 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:27.515 true 00:31:27.515 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:27.515 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.776 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.036 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:28.036 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:28.036 true 00:31:28.036 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:28.036 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.297 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.557 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:28.557 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:28.557 true 00:31:28.557 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
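Everything from here on is the stress loop proper: spdk_nvme_perf (PID 558462) drives 128-deep 512-byte random reads at the subsystem for 30 s while each iteration removes and re-adds namespace 1 (Delay0) and grows NULL1 by one MB (1001, 1002, ...; the bare 'true' lines are the resize RPC's output), with kill -0 $PERF_PID asserting after each step that the initiator workload survived the hotplug. The loop body, reduced to its shape:

    #!/usr/bin/env bash
    # Shape of the ns hotplug stress loop running against live perf I/O.
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # stop when perf exits
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        $rpc bdev_null_resize NULL1 $(( ++null_size ))
    done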
target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:28.557 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.818 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.079 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:29.079 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:29.079 true 00:31:29.340 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:29.340 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.340 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.600 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:29.600 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:29.860 true 00:31:29.861 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:29.861 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.861 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.121 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:30.121 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:30.382 true 00:31:30.382 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:30.382 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.643 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:31:30.643 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:30.643 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:30.904 true 00:31:30.904 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:30.904 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.165 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.165 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:31.165 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:31.427 true 00:31:31.427 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:31.427 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.689 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.950 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:31.950 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:31.950 true 00:31:31.950 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:31.950 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.211 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.472 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:32.472 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:32.472 true 00:31:32.472 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:32.472 
19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.733 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.993 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:32.993 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:32.993 true 00:31:33.253 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:33.253 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.253 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.514 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:33.514 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:33.775 true 00:31:33.775 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:33.775 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.775 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.036 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:34.036 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:34.296 true 00:31:34.296 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:34.296 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.557 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.557 19:21:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:34.557 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:34.817 true 00:31:34.817 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:34.817 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.078 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.078 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:35.078 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:35.338 true 00:31:35.338 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:35.338 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.598 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.598 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:35.598 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:35.860 true 00:31:35.860 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:35.860 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.121 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.382 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:36.382 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:36.382 true 00:31:36.382 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:36.382 19:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.643 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.904 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:36.904 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:36.904 true 00:31:36.904 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:36.904 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.165 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.427 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:37.427 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:37.687 true 00:31:37.687 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:37.687 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.687 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.948 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:37.948 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:38.208 true 00:31:38.209 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:38.209 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.469 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.469 19:21:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:38.469 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:38.730 true 00:31:38.730 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:38.730 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:38.991 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:38.991 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:38.991 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:39.252 true 00:31:39.252 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:39.252 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.513 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.774 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:39.774 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:39.774 true 00:31:39.774 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:39.774 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.035 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.295 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:40.295 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:40.295 true 00:31:40.295 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:40.295 19:21:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:40.555 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:40.815 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:40.815 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:40.815 true 00:31:41.076 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:41.076 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.076 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.336 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:41.336 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:41.597 true 00:31:41.597 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:41.597 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.597 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.858 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:41.858 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:42.120 true 00:31:42.120 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:42.120 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.380 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.380 19:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:42.380 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:42.640 true 00:31:42.640 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:42.640 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.925 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.925 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:42.925 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:43.234 true 00:31:43.234 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:43.234 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.523 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.523 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:43.523 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:43.784 true 00:31:43.784 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:43.784 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.045 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.045 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:44.045 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:44.306 true 00:31:44.306 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:44.306 19:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.568 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.828 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:44.828 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:44.828 true 00:31:44.828 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:44.828 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.089 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.350 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:45.350 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:45.350 true 00:31:45.350 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:45.350 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.611 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.872 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:45.872 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:45.872 true 00:31:46.132 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:46.132 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.132 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.392 19:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:46.392 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:46.652 true 00:31:46.652 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:46.653 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.653 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.913 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:46.913 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:47.173 true 00:31:47.173 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:47.173 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.433 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.433 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:47.433 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:47.694 true 00:31:47.694 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:47.694 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.954 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.954 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:47.954 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:48.215 true 00:31:48.215 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:48.215 19:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.475 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.735 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:48.735 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:48.735 true 00:31:48.735 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:48.735 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.995 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.255 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:49.255 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:49.255 true 00:31:49.255 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:49.255 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.514 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.775 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:49.775 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:49.775 true 00:31:50.036 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:50.036 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.036 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.298 19:21:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:50.298 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:50.558 true 00:31:50.558 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:50.558 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.558 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.819 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:50.819 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:51.081 true 00:31:51.081 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:51.081 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.342 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.342 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:51.342 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:51.604 true 00:31:51.604 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:51.604 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.866 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.866 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:51.866 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:52.127 true 00:31:52.127 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:52.127 19:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.387 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.647 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:52.647 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:52.647 true 00:31:52.647 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:52.647 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.908 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.170 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:53.170 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:53.170 true 00:31:53.170 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:53.170 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.431 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.692 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:53.692 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:53.692 true 00:31:53.953 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:53.953 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.953 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.214 19:21:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:54.214 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:54.475 true 00:31:54.475 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:54.475 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.475 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.736 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:54.736 19:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:54.998 true 00:31:54.998 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:54.998 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.260 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.260 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:55.260 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:55.521 true 00:31:55.521 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:55.521 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.781 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.782 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:55.782 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:56.042 true 00:31:56.042 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:56.042 19:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.302 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.564 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:56.564 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:56.564 true 00:31:56.564 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:56.564 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.825 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.086 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:57.086 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:31:57.086 true 00:31:57.086 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462 00:31:57.086 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.348 Initializing NVMe Controllers 00:31:57.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:57.348 Controller IO queue size 128, less than required. 00:31:57.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:57.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:57.348 Initialization complete. Launching workers. 
00:31:57.348 ========================================================
00:31:57.348 Latency(us)
00:31:57.348 Device Information : IOPS MiB/s Average min max
00:31:57.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29865.98 14.58 4285.84 1532.90 11011.39
00:31:57.348 ========================================================
00:31:57.348 Total : 29865.98 14.58 4285.84 1532.90 11011.39
00:31:57.348
00:31:57.348 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:57.609 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:57.609 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:57.609 true
00:31:57.870 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 558462
00:31:57.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (558462) - No such process
00:31:57.870 19:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 558462
00:31:57.870 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:57.870 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:58.132 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:58.132 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:58.132 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:58.132 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:58.132 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:58.393 null0
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:58.393 null1
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
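For orientation, the phase that just finished can be read straight off the xtrace: ns_hotplug_stress.sh keeps hot-removing and re-adding namespace 1 while growing a null bdev, until the backgrounded I/O process (558462 here) exits and kill -0 starts failing. A minimal sketch of that loop, reconstructed only from the @44-@50 trace lines above (rpc_py, nqn and perf_pid are illustrative names, not taken from the script itself):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  perf_pid=$1      # PID of the backgrounded I/O workload (558462 in this run)
  null_size=1000
  while kill -0 "$perf_pid"; do                     # @44: loop while the workload is alive
      $rpc_py nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove NSID 1 under load
      $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0   # @46: hot-add the Delay0 bdev back
      ((++null_size))                               # @49: 1011, 1012, ... 1055 in this run
      $rpc_py bdev_null_resize NULL1 "$null_size"   # @50: resize NULL1 concurrently
  done
  wait "$perf_pid"                                  # @53: reap the workload once it is gone

Once the workload exits, the "No such process" from kill is expected, the remaining namespaces are removed (@54/@55), and the test switches to the threaded add/remove phase below.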
00:31:58.393 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:58.655 null2 00:31:58.655 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:58.655 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:58.655 19:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:58.917 null3 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:58.917 null4 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:58.917 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:59.179 null5 00:31:59.179 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:59.179 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:59.179 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:59.440 null6 00:31:59.440 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:59.440 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:59.440 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:59.440 null7 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
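The eight bdev_null_create calls traced in this stretch are plain setup (@58-@60): one null bdev per worker thread. A short sketch, under the assumption (not confirmed by the log) that the two numeric arguments are the size in MiB and the block size in bytes:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096   # RPC echoes the new name: null0 ... null7
  done

The bare null0 ... null7 lines interleaved above are exactly those echoed names.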
00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:59.703 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 565419 565420 565423 565424 565426 565428 565430 565432 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.704 19:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:59.704 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:59.704 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:59.966 19:21:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:59.966 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:00.227 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.228 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:00.488 19:21:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:00.488 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:00.749 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:00.749 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.749 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:00.750 19:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:01.011 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:01.012 
19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.012 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.272 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:01.532 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.792 19:21:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.792 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.052 19:21:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.052 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.312 
19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.312 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
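
(The storm of add/remove entries interleaves because eight background workers share one xtrace stream; each worker owns one namespace ID and cycles it ten times. The @-line markers in the prefixes (@14 to @18 for the worker function, @62 to @66 for the driver loop and the "wait 565419 565420 ..." reap) let the control flow be reconstructed. The sketch below is inferred from those markers, not copied from target/ns_hotplug_stress.sh, so variable names and details may differ:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                        # @14: one worker per nsid/bdev pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do    # @16: ten hotplug cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do   # @62: spawn the workers
        add_remove $((i + 1)) "null$i" &   # @63: e.g. "add_remove 6 null5" above
        pids+=($!)                         # @64: remember each worker PID
    done
    wait "${pids[@]}"                      # @66: reap all eight workers

Because eight workers add and remove namespaces on the same subsystem concurrently, the target's namespace attach/detach paths run under contention, which is the point of the stress test.)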
00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.572 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.833 19:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.833 
19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.833 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:03.093 19:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:03.093 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.352 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:03.611 rmmod nvme_tcp 00:32:03.611 rmmod nvme_fabrics 00:32:03.611 rmmod nvme_keyring 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 557984 ']' 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 557984 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 557984 ']' 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 557984 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 557984 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:03.611 19:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 557984' 00:32:03.611 killing process with pid 557984 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 557984 00:32:03.611 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 557984 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:03.871 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:05.777 19:21:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:32:05.777 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:32:05.778 00:32:05.778 real 0m49.174s 00:32:05.778 user 3m3.343s 00:32:05.778 sys 0m22.165s 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:05.778 ************************************ 00:32:05.778 END TEST nvmf_ns_hotplug_stress 00:32:05.778 ************************************ 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:05.778 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:06.038 ************************************ 00:32:06.038 START TEST nvmf_delete_subsystem 00:32:06.038 ************************************ 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:06.038 * Looking for test storage... 
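
The nvmf_ns_hotplug_stress run that ends above is target/ns_hotplug_stress.sh cycling namespaces on nqn.2016-06.io.spdk:cnode1 through rpc.py while the target serves I/O: @17 attaches a null bdev as a namespace, @18 detaches one, and the @16 counter bounds the whole thing to ten passes. A minimal sketch of that loop follows. The RPC verbs, the NQN, and the iteration bound come straight from the trace; the null-bdev naming (nullN backing NSID N+1, as in 'add_ns -n 3 ... null2'), the random NSID pick, and the error tolerance are assumptions, since the script body itself is not echoed in this log.

  #!/usr/bin/env bash
  # Sketch of the add/remove pattern traced at ns_hotplug_stress.sh@16-18.
  RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  for (( i = 0; i < 10; i++ )); do
      nsid=$(( (RANDOM % 10) + 1 ))   # assumed: random NSID in 1..10
      # Attach null bdev nullN as namespace N+1, then detach it again.
      # '|| true' is an assumption: under stress either call may lose a race
      # against in-flight I/O, and that must not abort the loop.
      "$RPC_PY" nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))" || true
      "$RPC_PY" nvmf_subsystem_remove_ns "$NQN" "$nsid" || true
  done

The trace shows several remove_ns calls per pass, so the real script evidently batches operations; the single add/remove pair above is only the skeleton of the technique.
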
00:32:06.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.038 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:06.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.039 --rc genhtml_branch_coverage=1 00:32:06.039 --rc genhtml_function_coverage=1 00:32:06.039 --rc genhtml_legend=1 00:32:06.039 --rc geninfo_all_blocks=1 00:32:06.039 --rc geninfo_unexecuted_blocks=1 00:32:06.039 00:32:06.039 ' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:06.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.039 --rc genhtml_branch_coverage=1 00:32:06.039 --rc genhtml_function_coverage=1 00:32:06.039 --rc genhtml_legend=1 00:32:06.039 --rc geninfo_all_blocks=1 00:32:06.039 --rc geninfo_unexecuted_blocks=1 00:32:06.039 00:32:06.039 ' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:06.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.039 --rc genhtml_branch_coverage=1 00:32:06.039 --rc genhtml_function_coverage=1 00:32:06.039 --rc genhtml_legend=1 00:32:06.039 --rc geninfo_all_blocks=1 00:32:06.039 --rc geninfo_unexecuted_blocks=1 00:32:06.039 00:32:06.039 ' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:06.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.039 --rc genhtml_branch_coverage=1 00:32:06.039 --rc genhtml_function_coverage=1 00:32:06.039 --rc 
genhtml_legend=1 00:32:06.039 --rc geninfo_all_blocks=1 00:32:06.039 --rc geninfo_unexecuted_blocks=1 00:32:06.039 00:32:06.039 ' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:06.039 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:32:06.299 19:21:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:32:06.299 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:14.434 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:14.434 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:14.434 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:14.434 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:14.434 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:14.434 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:14.435 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:14.435 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:14.435 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:14.435 10.0.0.1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:14.435 10.0.0.2 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:14.435 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:14.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:14.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.654 ms 00:32:14.436 00:32:14.436 --- 10.0.0.1 ping statistics --- 00:32:14.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.436 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:14.436 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:14.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:32:14.436 00:32:14.436 --- 10.0.0.2 ping statistics --- 00:32:14.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.436 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:14.436 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:32:14.436 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:32:14.437 19:21:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:14.437 19:21:42 
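Worth noting how the get_ip_address chain above resolves addresses: get_net_dev maps a logical name (initiator0, target0, ...) to the kernel interface recorded in dev_map, and the IP is then read back from that interface's ifalias file rather than parsed out of `ip addr` output; when a logical device has no mapping (initiator1/target1 in this run), the lookup returns early and NVMF_SECOND_INITIATOR_IP / NVMF_SECOND_TARGET_IP stay empty. A sketch of the read path, assuming the alias was stored when the interfaces were configured:

  # assumption: setup wrote the address into the alias when creating the pair, e.g.
  #   ip link set dev cvl_0_0 alias "10.0.0.1"
  ip=$(cat /sys/class/net/cvl_0_0/ifalias)                         # initiator side, default netns
  [[ -n $ip ]] && echo "$ip"                                       # -> 10.0.0.1
  ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias    # target side -> 10.0.0.2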
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=570615 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 570615 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 570615 ']' 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:14.437 19:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.437 [2024-11-05 19:21:42.964037] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:14.437 [2024-11-05 19:21:42.965164] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:32:14.437 [2024-11-05 19:21:42.965217] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.437 [2024-11-05 19:21:43.047119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:14.437 [2024-11-05 19:21:43.087399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.437 [2024-11-05 19:21:43.087437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:14.437 [2024-11-05 19:21:43.087446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.437 [2024-11-05 19:21:43.087453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.437 [2024-11-05 19:21:43.087459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.437 [2024-11-05 19:21:43.088686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.437 [2024-11-05 19:21:43.088688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.437 [2024-11-05 19:21:43.144308] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:14.437 [2024-11-05 19:21:43.144816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:14.437 [2024-11-05 19:21:43.145155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.698 [2024-11-05 19:21:43.809305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.698 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.699 19:21:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.699 [2024-11-05 19:21:43.837781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.699 NULL1 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.699 Delay0 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=570739 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:14.699 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:14.699 [2024-11-05 19:21:43.939220] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
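All of the provisioning traced above goes over the RPC socket of the interrupt-mode target that nvmfappstart launched (nvmf/common.sh@327). The same sequence can be replayed by hand with scripts/rpc.py; the commands and arguments below are copied from the traces, with only the default socket path spelled out (/var/tmp/spdk.sock, the one waitforlisten polls). The 1000000 us settings on the delay bdev are what push the I/O toward the roughly one-second average latencies visible in the perf summaries below:

  # target, as launched above:
  #   ip netns exec nvmf_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc -s /var/tmp/spdk.sock bdev_null_create NULL1 1000 512         # 1000 MiB backing bdev, 512 B blocks
  $rpc -s /var/tmp/spdk.sock bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0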
00:32:16.610 19:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:16.610 19:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.610 19:21:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Write completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 Read completed with error (sct=0, sc=8) 00:32:16.872 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed 
with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O 
failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 [2024-11-05 19:21:46.018536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6482c0 is same with the state(6) to be set 00:32:16.873 starting I/O failed: -6 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 starting I/O failed: -6 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 [2024-11-05 19:21:46.021998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe90000d490 is same with the state(6) to be set 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 
Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Write completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:16.873 Read completed with error (sct=0, sc=8) 00:32:17.815 [2024-11-05 19:21:46.997077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6499a0 is same with the state(6) to be set 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error 
(sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 [2024-11-05 19:21:47.023041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648780 is same with the state(6) to be set 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 [2024-11-05 19:21:47.023368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x648b40 is same with the state(6) to be set 00:32:17.815 Read completed with error (sct=0, sc=8) 
00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 [2024-11-05 19:21:47.023771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe90000d7c0 is same with the state(6) to be set 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Write completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 Read completed with error (sct=0, sc=8) 00:32:17.815 [2024-11-05 19:21:47.024224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe90000d020 is same with the state(6) to be set 00:32:17.815 Initializing NVMe Controllers 00:32:17.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:17.815 Controller IO queue size 128, less than required. 00:32:17.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:17.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:17.815 Initialization complete. Launching workers. 
00:32:17.816 ========================================================
00:32:17.816                                                                            Latency(us)
00:32:17.816 Device Information                                                       :      IOPS      MiB/s    Average        min        max
00:32:17.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    189.70       0.09  895768.49     302.29  1008432.59
00:32:17.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    149.87       0.07  945043.53     253.33  1010408.52
00:32:17.816 ========================================================
00:32:17.816 Total                                                                    :    339.57       0.17  917515.98     253.33  1010408.52
00:32:17.816
00:32:17.816 [2024-11-05 19:21:47.024686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6499a0 (9): Bad file descriptor
00:32:17.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:17.816 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:17.816 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:17.816 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 570739
00:32:17.816 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 570739
00:32:18.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (570739) - No such process
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 570739
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 570739
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 570739
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.388 [2024-11-05 19:21:47.557508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=571484 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:18.388 19:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:18.388 [2024-11-05 19:21:47.629523] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
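The wall of `completed with error (sct=0, sc=8)` entries above is the expected payoff of the test: with Delay0 adding roughly a second of artificial latency, the 128 queued commands are still in flight when nvmf_delete_subsystem tears the subsystem down, and each completes with a generic-status abort (sct=0/sc=8 reads as Command Aborted due to SQ Deletion in the NVMe generic status table, to my reading) instead of hanging. The scenario, condensed from the traced commands with only the glue around them added:

  # rpc_cmd and NOT come from the suite's common helpers (rpc.py wrapper / exit-status inverter)
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &          # flags verbatim from the first run above
  perf_pid=$!
  sleep 2                                                  # let perf connect and fill its queues
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 # qpairs torn down mid-I/O
  NOT wait "$perf_pid"                                     # perf must exit non-zero ("errors occurred")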
00:32:18.959 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:18.960 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:18.960 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:19.531 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:19.531 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:19.531 19:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:19.793 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:19.793 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:19.793 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:20.364 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:20.365 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:20.365 19:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:20.934 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:20.934 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:20.935 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:21.505 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:21.505 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484 00:32:21.505 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:21.765 Initializing NVMe Controllers 00:32:21.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.765 Controller IO queue size 128, less than required. 00:32:21.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:21.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:21.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:21.766 Initialization complete. Launching workers. 
00:32:21.766 ========================================================
00:32:21.766                                                                            Latency(us)
00:32:21.766 Device Information                                                       :      IOPS      MiB/s     Average         min         max
00:32:21.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    128.00       0.06  1003122.50  1000203.02  1043574.27
00:32:21.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    128.00       0.06  1003981.18  1000238.43  1042118.17
00:32:21.766 ========================================================
00:32:21.766 Total                                                                    :    256.00       0.12  1003551.84  1000203.02  1043574.27
00:32:21.766
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 571484
00:32:22.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (571484) - No such process
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 571484
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:32:22.026 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 570615 ']'
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 570615
00:32:22.026 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 570615 ']'
00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 570615
00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
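The repeated kill -0 / sleep 0.5 pairs in the traces above are the script's bounded liveness poll: signal 0 delivers nothing and only tests whether the perf PID still exists, the counter caps the wait (20 half-second ticks for this 3-second run), and the final "No such process" complaint is the loop's normal exit. Condensed to its shape from the traced lines at delete_subsystem.sh@56-60 and @67:

  delay=0
  while kill -0 "$perf_pid"; do     # probe only; the last probe's "No such process" marks the exit
    (( delay++ > 20 )) && exit 1    # bound the wait; the traced counter limit is 20
    sleep 0.5
  done
  wait "$perf_pid"                  # reap the background perf and pick up its exit status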
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 570615 00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 570615' 00:32:22.027 killing process with pid 570615 00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 570615 00:32:22.027 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 570615 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:22.287 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:32:24.203 00:32:24.203 real 0m18.330s 00:32:24.203 user 0m26.628s 00:32:24.203 sys 0m7.354s 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:24.203 ************************************ 00:32:24.203 END TEST nvmf_delete_subsystem 00:32:24.203 ************************************ 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:24.203 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:24.465 ************************************ 00:32:24.465 START TEST nvmf_host_management 00:32:24.465 ************************************ 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:24.465 * Looking for test storage... 
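Stepping back to the nvmf_fini sequence traced just before the END/START banners: it undoes the interface setup symmetrically, walking dev_map, flushing addresses from any interface that still exists, clearing the map, and finally restoring every iptables rule except the SPDK-tagged ones by filtering a saved ruleset back through iptables-restore. Roughly, with the device names from this run and the namespace removal being an assumption about what _remove_target_ns does:

  ip netns delete nvmf_ns_spdk 2>/dev/null               # assumed _remove_target_ns equivalent
  for dev in cvl_0_0 cvl_0_1; do                         # dev_map entries for this run
    [[ -e /sys/class/net/$dev/address ]] && ip addr flush dev "$dev"
  done
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything not tagged SPDK_NVMF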
00:32:24.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.465 --rc genhtml_branch_coverage=1 00:32:24.465 --rc genhtml_function_coverage=1 00:32:24.465 --rc genhtml_legend=1 00:32:24.465 --rc geninfo_all_blocks=1 00:32:24.465 --rc geninfo_unexecuted_blocks=1 00:32:24.465 00:32:24.465 ' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.465 --rc genhtml_branch_coverage=1 00:32:24.465 --rc genhtml_function_coverage=1 00:32:24.465 --rc genhtml_legend=1 00:32:24.465 --rc geninfo_all_blocks=1 00:32:24.465 --rc geninfo_unexecuted_blocks=1 00:32:24.465 00:32:24.465 ' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.465 --rc genhtml_branch_coverage=1 00:32:24.465 --rc genhtml_function_coverage=1 00:32:24.465 --rc genhtml_legend=1 00:32:24.465 --rc geninfo_all_blocks=1 00:32:24.465 --rc geninfo_unexecuted_blocks=1 00:32:24.465 00:32:24.465 ' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.465 --rc genhtml_branch_coverage=1 00:32:24.465 --rc genhtml_function_coverage=1 00:32:24.465 --rc genhtml_legend=1 
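The cmp_versions trace running through here is how the suite decides whether the installed lcov is new enough: both version strings are split on ".", "-", and ":" via IFS, components are validated as decimals and compared left to right, and the first inequality decides. A condensed sketch of that comparison (simplified; the traced helper also pads and validates each component):

  # returns 0 if $1 < $2, comparing dotted components numerically
  version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
  }
  version_lt 1.15 2 && echo "old lcov"   # the comparison traced above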
00:32:24.465 --rc geninfo_all_blocks=1 00:32:24.465 --rc geninfo_unexecuted_blocks=1 00:32:24.465 00:32:24.465 ' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.465 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:32:24.466 19:21:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:32:24.466 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.755 19:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:32.755 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:32.755 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:32.755 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:32.755 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 
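The common.sh@226-245 trace above walks each detected e810 PCI function and globs its kernel net device out of sysfs (e.g. 0000:4b:00.0 -> cvl_0_0). A minimal sketch of that discovery pattern, assuming the same sysfs layout; the pci_devs array and the cvl_* names come straight from the log, everything else is illustrative:

    # For each NVMf-capable PCI function, collect the net device(s)
    # registered under it in sysfs.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob netdev dirs
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep basenames only
        net_devs+=("${pci_net_devs[@]}")
    done
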
00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:32.755 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ 
tcp == rdma ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:32.756 10.0.0.1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:32.756 10.0.0.2 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:32.756 19:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:32.756 19:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.756 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:32.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:32.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.565 ms 00:32:32.757 00:32:32.757 --- 10.0.0.1 ping statistics --- 00:32:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.757 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 
10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:32.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:32:32.757 00:32:32.757 --- 10.0.0.2 ping statistics --- 00:32:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.757 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.757 19:22:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:32:32.757 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 
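The repeated get_ip_address blocks above resolve a logical endpoint name (initiator0, target0) to a net device via dev_map and then read back the address that set_ip stored in the interface's ifalias, running inside the target namespace when one applies. A condensed sketch of just the lookup step, assuming the ifalias convention shown in the log; the real helper resolves the name through dev_map first, and the ns_cmd prefix here is illustrative:

    # Read the IP recorded in /sys/class/net/<dev>/ifalias, optionally
    # inside a network namespace.
    get_ip_address() {
        local dev=$1 ns_cmd=${2-}   # e.g. cvl_0_1, "ip netns exec nvmf_ns_spdk"
        $ns_cmd cat "/sys/class/net/$dev/ifalias"
    }
    get_ip_address cvl_0_0                                # -> 10.0.0.1
    get_ip_address cvl_0_1 "ip netns exec nvmf_ns_spdk"  # -> 10.0.0.2
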
00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=576338 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 576338 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 576338 ']' 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.758 19:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:32.758 [2024-11-05 19:22:01.266913] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.758 [2024-11-05 19:22:01.268065] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:32:32.758 [2024-11-05 19:22:01.268118] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.758 [2024-11-05 19:22:01.366976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.758 [2024-11-05 19:22:01.420182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
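nvmfappstart backgrounds nvmf_tgt inside the namespace and waitforlisten then polls the UNIX-domain RPC socket until the app answers, which is why the "Waiting for process to start up..." message precedes the DPDK initialization output. A minimal sketch of that polling pattern using SPDK's stock scripts/rpc.py; the socket path is from the log, the retry count and interval are illustrative:

    # Poll the RPC socket until nvmf_tgt is ready to serve RPCs.
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
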
00:32:32.758 [2024-11-05 19:22:01.420236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.758 [2024-11-05 19:22:01.420244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.758 [2024-11-05 19:22:01.420251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.758 [2024-11-05 19:22:01.420257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.758 [2024-11-05 19:22:01.422467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.758 [2024-11-05 19:22:01.422636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.758 [2024-11-05 19:22:01.422803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:32.758 [2024-11-05 19:22:01.422804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.758 [2024-11-05 19:22:01.499138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.758 [2024-11-05 19:22:01.499764] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.758 [2024-11-05 19:22:01.500672] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.758 [2024-11-05 19:22:01.500831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:32.758 [2024-11-05 19:22:01.500985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.019 [2024-11-05 19:22:02.123795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.019 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.020 Malloc0 00:32:33.020 [2024-11-05 19:22:02.215971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=576710 00:32:33.020 19:22:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 576710 /var/tmp/bdevperf.sock 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 576710 ']' 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:32:33.020 { 00:32:33.020 "params": { 00:32:33.020 "name": "Nvme$subsystem", 00:32:33.020 "trtype": "$TEST_TRANSPORT", 00:32:33.020 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:33.020 "adrfam": "ipv4", 00:32:33.020 "trsvcid": "$NVMF_PORT", 00:32:33.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:33.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:33.020 "hdgst": ${hdgst:-false}, 00:32:33.020 "ddgst": ${ddgst:-false} 00:32:33.020 }, 00:32:33.020 "method": "bdev_nvme_attach_controller" 00:32:33.020 } 00:32:33.020 EOF 00:32:33.020 )") 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
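The gen_nvmf_target_json xtrace above shows how the bdevperf configuration is produced: a heredoc template per subsystem (the $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT placeholders) is expanded, concatenated, and validated with jq, and bdevperf then reads the result through the /dev/fd/63 process-substitution descriptor visible in the command line above. A minimal sketch of the same plumbing, assuming the helper has been sourced from nvmf/common.sh as in this run:

    # bdevperf takes its bdev config on --json; <(...) appears to it as /dev/fd/63
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10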
00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:32:33.020 19:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:32:33.020 "params": { 00:32:33.020 "name": "Nvme0", 00:32:33.020 "trtype": "tcp", 00:32:33.020 "traddr": "10.0.0.2", 00:32:33.020 "adrfam": "ipv4", 00:32:33.020 "trsvcid": "4420", 00:32:33.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.020 "hdgst": false, 00:32:33.020 "ddgst": false 00:32:33.020 }, 00:32:33.020 "method": "bdev_nvme_attach_controller" 00:32:33.020 }' 00:32:33.020 [2024-11-05 19:22:02.320761] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:32:33.020 [2024-11-05 19:22:02.320818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576710 ] 00:32:33.281 [2024-11-05 19:22:02.391764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.281 [2024-11-05 19:22:02.428290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.541 Running I/O for 10 seconds... 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=781
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 781 -ge 100 ']'
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:34.113 [2024-11-05 19:22:03.195593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b82a0 is same with the state(6) to be set
00:32:34.113 [log condensed: the same tcp.c:1773 *ERROR* message repeats some thirty more times with timestamps through 19:22:03.195877 while the target tears down the now-unauthorized connection]
00:32:34.113 [2024-11-05 19:22:03.199856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:34.113 [2024-11-05 19:22:03.199898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.113 [2024-11-05 19:22:03.199909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:34.113 [2024-11-05 19:22:03.199917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.113 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:34.113 [2024-11-05 19:22:03.199925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:34.113 [2024-11-05 19:22:03.199933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.113 [2024-11-05 19:22:03.199941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:34.113 [2024-11-05 19:22:03.199948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.113 [2024-11-05 19:22:03.199956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191e000 is same with the state(6) to be set
00:32:34.113 [2024-11-05 19:22:03.200007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.113 [2024-11-05 19:22:03.200018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.113 [2024-11-05 19:22:03.200032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:34.113 [2024-11-05 19:22:03.200045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:34.114 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:34.114 [log condensed: nvme_qpair.c: 243/474 print the rest of the 64 in-flight commands and their completions, timestamps 19:22:03.200056 through 19:22:03.201129 — WRITE sqid:1 cid:2-27 (lba:114944-118144) and READ sqid:1 cid:28-63 (lba:110080-114560), len:128 each, every one completed ABORTED - SQ DELETION (00/08) qid:1]
00:32:34.114 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:34.114 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:34.115 [2024-11-05 19:22:03.202393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:34.115 task offset: 114688 on job bdev=Nvme0n1 fails
00:32:34.115
00:32:34.115                                                            Latency(us)
00:32:34.115 [2024-11-05T18:22:03.438Z] Device Information                    : runtime(s)      IOPS     MiB/s    Fail/s    TO/s     Average       min       max
00:32:34.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:34.115 Job: Nvme0n1 ended in about 0.47 seconds with error
00:32:34.115 Verification LBA range: start 0x0 length 0x400
00:32:34.115 Nvme0n1                                                   :       0.47   1848.20    115.51    137.54    0.00    31263.88   1699.84   31238.83
00:32:34.115 [2024-11-05T18:22:03.438Z] ===================================================================================================================
00:32:34.115 [2024-11-05T18:22:03.438Z] Total                                 :              1848.20    115.51    137.54    0.00    31263.88   1699.84   31238.83
00:32:34.115 [2024-11-05 19:22:03.204374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:34.115 [2024-11-05 19:22:03.204399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191e000 (9): Bad file descriptor
00:32:34.115 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:34.115 19:22:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:34.115 [2024-11-05 19:22:03.298002] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
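Taken together, the burst above is the point of the test: nvmf_subsystem_remove_host revokes the initiator's NQN, the target drops the TCP qpair (the tcp.c/nvme_tcp.c recv-state errors), all queued I/O completes as ABORTED - SQ DELETION, and once nvmf_subsystem_add_host re-authorizes the host, bdev_nvme's reset path reconnects ("Resetting controller successful"). The same exercise can be driven by hand with the standard SPDK RPC client (rpc.py and both RPC names are real SPDK RPCs; the NQNs are the ones in this log):

    # Revoke the host: in-flight I/O aborts and the initiator is disconnected
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-allow it: bdev_nvme's automatic reset/reconnect then succeeds
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0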
00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 576710 00:32:35.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (576710) - No such process 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:32:35.057 { 00:32:35.057 "params": { 00:32:35.057 "name": "Nvme$subsystem", 00:32:35.057 "trtype": "$TEST_TRANSPORT", 00:32:35.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.057 "adrfam": "ipv4", 00:32:35.057 "trsvcid": "$NVMF_PORT", 00:32:35.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.057 "hdgst": ${hdgst:-false}, 00:32:35.057 "ddgst": ${ddgst:-false} 00:32:35.057 }, 00:32:35.057 "method": "bdev_nvme_attach_controller" 00:32:35.057 } 00:32:35.057 EOF 00:32:35.057 )") 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:32:35.057 19:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:32:35.057 "params": { 00:32:35.057 "name": "Nvme0", 00:32:35.057 "trtype": "tcp", 00:32:35.057 "traddr": "10.0.0.2", 00:32:35.057 "adrfam": "ipv4", 00:32:35.057 "trsvcid": "4420", 00:32:35.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.057 "hdgst": false, 00:32:35.057 "ddgst": false 00:32:35.057 }, 00:32:35.057 "method": "bdev_nvme_attach_controller" 00:32:35.057 }' 00:32:35.057 [2024-11-05 19:22:04.273320] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
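This second bdevperf run (-t 1) only needs to show that I/O flows again after the reset. The harness decides that the same way the earlier waitforio loop did: poll bdev_get_iostat on the bdevperf RPC socket and compare the read count against a threshold. A hedged sketch of that loop, mirroring the "read_io_count=781 / -ge 100" check seen above (rpc.py and bdev_get_iostat are standard SPDK; the retry shape here is an assumption):

    # Poll until Nvme0n1 has completed at least 100 reads; give up after 10 tries
    for i in $(seq 1 10); do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break
        sleep 1
    done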
00:32:35.057 [2024-11-05 19:22:04.273378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577060 ] 00:32:35.057 [2024-11-05 19:22:04.343920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.057 [2024-11-05 19:22:04.380752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.317 Running I/O for 1 seconds... 00:32:36.259 1813.00 IOPS, 113.31 MiB/s 00:32:36.259 Latency(us) 00:32:36.259 [2024-11-05T18:22:05.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.259 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:36.259 Verification LBA range: start 0x0 length 0x400 00:32:36.259 Nvme0n1 : 1.01 1861.45 116.34 0.00 0.00 33707.46 2334.72 34297.17 00:32:36.259 [2024-11-05T18:22:05.582Z] =================================================================================================================== 00:32:36.259 [2024-11-05T18:22:05.582Z] Total : 1861.45 116.34 0.00 0.00 33707.46 2334.72 34297.17 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:36.520 rmmod nvme_tcp 00:32:36.520 rmmod nvme_fabrics 00:32:36.520 rmmod nvme_keyring 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 576338 ']' 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 576338 00:32:36.520 19:22:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 576338 ']' 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 576338 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 576338 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 576338' 00:32:36.520 killing process with pid 576338 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 576338 00:32:36.520 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 576338 00:32:36.781 [2024-11-05 19:22:05.939713] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:36.781 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:39.326 00:32:39.326 real 0m14.530s 00:32:39.326 user 0m18.924s 00:32:39.326 sys 0m7.404s 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:39.326 ************************************ 00:32:39.326 END TEST nvmf_host_management 00:32:39.326 ************************************ 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:39.326 19:22:08 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.326 ************************************ 00:32:39.326 START TEST nvmf_lvol 00:32:39.326 ************************************ 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:39.326 * Looking for test storage... 00:32:39.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:32:39.326 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.327 --rc genhtml_branch_coverage=1 00:32:39.327 --rc genhtml_function_coverage=1 00:32:39.327 --rc genhtml_legend=1 00:32:39.327 --rc geninfo_all_blocks=1 00:32:39.327 --rc geninfo_unexecuted_blocks=1 00:32:39.327 00:32:39.327 ' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.327 --rc genhtml_branch_coverage=1 00:32:39.327 --rc genhtml_function_coverage=1 00:32:39.327 --rc genhtml_legend=1 00:32:39.327 --rc geninfo_all_blocks=1 00:32:39.327 --rc geninfo_unexecuted_blocks=1 00:32:39.327 00:32:39.327 ' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.327 --rc genhtml_branch_coverage=1 00:32:39.327 --rc genhtml_function_coverage=1 00:32:39.327 --rc genhtml_legend=1 00:32:39.327 --rc geninfo_all_blocks=1 00:32:39.327 --rc geninfo_unexecuted_blocks=1 00:32:39.327 00:32:39.327 ' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:39.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.327 --rc genhtml_branch_coverage=1 00:32:39.327 --rc genhtml_function_coverage=1 00:32:39.327 --rc genhtml_legend=1 00:32:39.327 --rc geninfo_all_blocks=1 00:32:39.327 --rc geninfo_unexecuted_blocks=1 00:32:39.327 00:32:39.327 ' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:39.327 19:22:08 
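(The ballooning PATH above is just paths/export.sh prepending its directories again on every re-source.) Back at the top of the common.sh trace, the host identity comes straight from nvme-cli: nvme gen-hostnqn prints a UUID-based NQN, and the host ID reused here is just that UUID. A sketch of the same derivation, where the parameter expansion is my shorthand and not necessarily the script's exact code:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip the prefix, keeping the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
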
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:39.327 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:32:39.328 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:45.922 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:45.923 19:22:15 
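The device tables above pin down which NICs count as usable: Intel E810 parts (device IDs 0x1592, 0x159b), X722 (0x37d2), and a range of Mellanox ConnectX IDs; with SPDK_TEST_NVMF_NICS=e810 the pci_devs list is then narrowed to the e810 entries alone. The script reads these from a prebuilt pci_bus_cache; an equivalent standalone query, offered only as an illustrative sketch and assuming lspci's bracketed [vendor:device] output format, would be:

    # hypothetical one-liner: list PCI addresses of Intel E810 NICs via lspci
    mapfile -t e810 < <(lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}')
    printf 'found %d e810 port(s): %s\n' "${#e810[@]}" "${e810[*]}"
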
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:45.923 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:45.923 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:45.923 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:45.923 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy 
transport=tcp ip_pool=0x0a000001 max 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:32:45.923 19:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:45.923 10.0.0.1 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:45.923 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:46.187 10.0.0.2 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 
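setup.sh hands addresses around as plain integers from ip_pool=0x0a000001 and only renders dotted quads at assignment time: val_to_ip turns 167772161 into 10.0.0.1 and 167772162 into 10.0.0.2. A sketch consistent with the printf seen in the trace, where the shift arithmetic is inferred rather than copied from the script:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >> 8)  & 255 )) $((  val        & 255 ))
    }
    val_to_ip 167772161   # 10.0.0.1  (0x0A000001)
    val_to_ip 167772162   # 10.0.0.2
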
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:46.187 19:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:46.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.553 ms 00:32:46.187 00:32:46.187 --- 10.0.0.1 ping statistics --- 00:32:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.187 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:46.187 19:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:46.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:32:46.187 00:32:46.187 --- 10.0.0.2 ping statistics --- 00:32:46.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.187 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:46.187 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:46.188 19:22:15 
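The ping_ips pass is the sanity gate for the whole topology: each direction of the initiator/target pair must answer exactly one ICMP echo before the test proceeds. Stripped of the helper indirection, the two probes above reduce to the following (a sketch assuming the nvmf_ns_spdk namespace and the 10.0.0.1/10.0.0.2 addresses set up earlier):

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator side
    ping -c 1 10.0.0.2                              # default namespace -> target side

Because cvl_0_1 was moved into its own network namespace, the two ports of one physical adapter behave like two separate hosts wired back to back, which is what makes this loopbacked phy test possible.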
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:46.188 
19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:46.188 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=581423 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 581423 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 581423 ']' 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:46.450 19:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:46.450 [2024-11-05 19:22:15.602627] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.450 [2024-11-05 19:22:15.603735] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:32:46.450 [2024-11-05 19:22:15.603793] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.450 [2024-11-05 19:22:15.688553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:46.450 [2024-11-05 19:22:15.730177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.450 [2024-11-05 19:22:15.730216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.450 [2024-11-05 19:22:15.730224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.450 [2024-11-05 19:22:15.730230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.450 [2024-11-05 19:22:15.730236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
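nvmfappstart boils down to launching nvmf_tgt inside the target namespace with the flags visible above (-i 0 -e 0xFFFF --interrupt-mode -m 0x7) and then polling /var/tmp/spdk.sock until RPC answers. A minimal sketch of that start-and-wait shape, with paths shortened from the absolute ones in this log and the polling loop written from scratch rather than copied from waitforlisten:

    ip netns exec nvmf_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # poll the RPC socket; rpc_get_methods is a cheap no-op query
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
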
00:32:46.450 [2024-11-05 19:22:15.734768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.450 [2024-11-05 19:22:15.735032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.450 [2024-11-05 19:22:15.735038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.711 [2024-11-05 19:22:15.791196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.711 [2024-11-05 19:22:15.791623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:46.711 [2024-11-05 19:22:15.791954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:46.711 [2024-11-05 19:22:15.792249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.284 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.544 [2024-11-05 19:22:16.615905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.544 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.544 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:47.544 19:22:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.805 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:47.805 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:48.067 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:48.327 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f0860e49-bda2-4ff6-a089-947701ac8a63 00:32:48.327 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0860e49-bda2-4ff6-a089-947701ac8a63 lvol 20 00:32:48.327 
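Condensed, the storage stack the test just built is five RPCs deep: a TCP transport, two 64 MiB malloc bdevs striped into a RAID-0, an lvstore on the raid, and a 20 MiB lvol inside it. The same sequence as bare calls, where rpc_py is the scripts/rpc.py shorthand the test itself defines at nvmf_lvol.sh@16:

    rpc_py nvmf_create_transport -t tcp -o -u 8192
    rpc_py bdev_malloc_create 64 512                     # -> Malloc0
    rpc_py bdev_malloc_create 64 512                     # -> Malloc1
    rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc_py bdev_lvol_create_lvstore raid0 lvs)     # UUID f0860e49-... in this run
    lvol=$(rpc_py bdev_lvol_create -u "$lvs" lvol 20)    # UUID 3bfcac77-... in this run
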
19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3bfcac77-8909-4fd5-a248-ffcce93bc123 00:32:48.327 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:48.588 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3bfcac77-8909-4fd5-a248-ffcce93bc123 00:32:48.588 19:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:48.848 [2024-11-05 19:22:18.047756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.848 19:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.109 19:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=582083 00:32:49.109 19:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:49.109 19:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:50.049 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3bfcac77-8909-4fd5-a248-ffcce93bc123 MY_SNAPSHOT 00:32:50.309 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=700d5545-79c6-4b65-b4af-5461b93dfc9c 00:32:50.309 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3bfcac77-8909-4fd5-a248-ffcce93bc123 30 00:32:50.569 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 700d5545-79c6-4b65-b4af-5461b93dfc9c MY_CLONE 00:32:50.569 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d77e69c8-9676-4db8-9e61-2613e444a93d 00:32:50.569 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d77e69c8-9676-4db8-9e61-2613e444a93d 00:32:51.138 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 582083 00:32:59.275 Initializing NVMe Controllers 00:32:59.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:59.275 Controller IO queue size 128, less than required. 00:32:59.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
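While spdk_nvme_perf drives random writes from lcores 3 and 4, the test walks the lvol through its whole lifecycle: a snapshot (MY_SNAPSHOT), a resize from 20 up to 30 MiB, a writable clone of the snapshot (MY_CLONE), and an inflate that decouples the clone from its snapshot. As bare RPCs, with rpc_py as above and the UUIDs being the values returned in this run:

    snap=$(rpc_py bdev_lvol_snapshot 3bfcac77-8909-4fd5-a248-ffcce93bc123 MY_SNAPSHOT)
    rpc_py bdev_lvol_resize 3bfcac77-8909-4fd5-a248-ffcce93bc123 30
    clone=$(rpc_py bdev_lvol_clone "$snap" MY_CLONE)   # 700d5545-... -> d77e69c8-...
    rpc_py bdev_lvol_inflate "$clone"                  # thick-provision the clone

Doing all of this while I/O is in flight is what the test is after: the lvstore metadata updates have to stay consistent while the interrupt-mode reactors keep servicing the fabric traffic.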
00:32:59.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:59.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:59.275 Initialization complete. Launching workers. 00:32:59.275 ======================================================== 00:32:59.275 Latency(us) 00:32:59.275 Device Information : IOPS MiB/s Average min max 00:32:59.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12425.80 48.54 10306.23 1530.58 64762.11 00:32:59.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15701.10 61.33 8155.43 1802.75 69312.10 00:32:59.275 ======================================================== 00:32:59.275 Total : 28126.89 109.87 9105.60 1530.58 69312.10 00:32:59.275 00:32:59.275 19:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:59.536 19:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3bfcac77-8909-4fd5-a248-ffcce93bc123 00:32:59.796 19:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0860e49-bda2-4ff6-a089-947701ac8a63 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:59.796 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:59.796 rmmod nvme_tcp 00:33:00.057 rmmod nvme_fabrics 00:33:00.057 rmmod nvme_keyring 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 581423 ']' 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 581423 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 581423 ']' 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 581423 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@957 -- # uname 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 581423 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 581423' 00:33:00.057 killing process with pid 581423 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 581423 00:33:00.057 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 581423 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:00.317 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:02.230 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:02.231 19:22:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:33:02.231 00:33:02.231 real 0m23.350s 00:33:02.231 user 0m55.266s 00:33:02.231 sys 0m10.430s 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:02.231 ************************************ 00:33:02.231 END TEST nvmf_lvol 00:33:02.231 ************************************ 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:02.231 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.493 ************************************ 00:33:02.493 START TEST nvmf_lvs_grow 00:33:02.493 ************************************ 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:02.493 * Looking for test storage... 
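The teardown just traced is the fixed nvmftestfini contract every suite in this run exits through: sync, unload the nvme-tcp/nvme-fabrics kernel modules, kill the target by its saved pid, flush the test IPs off both ports, and finally strip the firewall rules. That last step (the iptr trace above) is worth spelling out because it never enumerates rules: everything the harness inserts is tagged with an SPDK_NVMF comment, so cleanup is one filter over the saved ruleset:

  # drop every rule the test run added, whatever it was:
  # re-load the ruleset minus the SPDK_NVMF-tagged lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore

The matching insertion side appears later in the setup trace, where each rule is added with -m comment --comment 'SPDK_NVMF:...'.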
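The nvmf_lvs_grow preamble that follows opens with a coverage gate: it reads the installed lcov version (1.15 here) and only enables the branch/function coverage flags when that version is below 2. The cmp_versions machinery it traces splits both version strings on '.', '-' and ':' and compares the pieces numerically, left to right. Reduced to a sketch (the real scripts/common.sh handles more operators the same way; this is not its verbatim code):

  # minimal component-wise "less than" over dotted versions, assuming numeric parts
  lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "old lcov"   # succeeds: 1 < 2 on the first component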
00:33:02.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:02.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.493 --rc genhtml_branch_coverage=1 00:33:02.493 --rc genhtml_function_coverage=1 00:33:02.493 --rc genhtml_legend=1 00:33:02.493 --rc geninfo_all_blocks=1 00:33:02.493 --rc geninfo_unexecuted_blocks=1 00:33:02.493 00:33:02.493 ' 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:02.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.493 --rc genhtml_branch_coverage=1 00:33:02.493 --rc genhtml_function_coverage=1 00:33:02.493 --rc genhtml_legend=1 00:33:02.493 --rc geninfo_all_blocks=1 00:33:02.493 --rc geninfo_unexecuted_blocks=1 00:33:02.493 00:33:02.493 ' 00:33:02.493 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:02.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.493 --rc genhtml_branch_coverage=1 00:33:02.493 --rc genhtml_function_coverage=1 00:33:02.493 --rc genhtml_legend=1 00:33:02.493 --rc geninfo_all_blocks=1 00:33:02.494 --rc geninfo_unexecuted_blocks=1 00:33:02.494 00:33:02.494 ' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:02.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.494 --rc genhtml_branch_coverage=1 00:33:02.494 --rc genhtml_function_coverage=1 00:33:02.494 --rc genhtml_legend=1 00:33:02.494 --rc geninfo_all_blocks=1 00:33:02.494 --rc geninfo_unexecuted_blocks=1 00:33:02.494 00:33:02.494 ' 00:33:02.494 19:22:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 
-- # export NVMF_APP_SHM_ID 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:33:02.494 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a 
pci_devs 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # 
[[ e810 == mlx5 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:10.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:10.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:10.643 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.643 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:10.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:10.644 19:22:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:10.644 10.0.0.1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # 
echo 10.0.0.2 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:10.644 10.0.0.2 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # 
get_tcp_initiator_ip_address 0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:10.644 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:10.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.618 ms 00:33:10.645 00:33:10.645 --- 10.0.0.1 ping statistics --- 00:33:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.645 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:10.645 19:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:10.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:10.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:33:10.645 00:33:10.645 --- 10.0.0.2 ping statistics --- 00:33:10.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.645 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 
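By this point the test network is fully assembled, and the trace above is easier to follow with the shape in mind. gather_supported_nvmf_pci_devs matched the two E810 ports (0x8086:0x159b) and their net devices cvl_0_0 and cvl_0_1; setup_interfaces then derived addresses from ip_pool=0x0a000001, which val_to_ip prints octet by octet as 10.0.0.1 (167772161), the target taking the next value, 10.0.0.2. Moving the target port into its own namespace is what forces traffic onto the wire: with both ports in one namespace the kernel would short-circuit 10.0.0.1 -> 10.0.0.2 locally. The net effect, using the exact commands from the trace:

  ip netns add nvmf_ns_spdk                          # target side lives here
  ip link set cvl_0_1 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_0                # initiator, host namespace
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
  ip link set cvl_0_0 up
  ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator
  ping -c 1 10.0.0.2                                 # initiator -> target namespace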
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:33:10.645 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=588159
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 588159
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 588159 ']'
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:10.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:10.646 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:10.646 [2024-11-05 19:22:39.195566] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:10.646 [2024-11-05 19:22:39.197402] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:33:10.646 [2024-11-05 19:22:39.197486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:10.646 [2024-11-05 19:22:39.282796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:10.646 [2024-11-05 19:22:39.322911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:10.646 [2024-11-05 19:22:39.322948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:10.646 [2024-11-05 19:22:39.322956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:10.646 [2024-11-05 19:22:39.322964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:10.646 [2024-11-05 19:22:39.322969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:10.646 [2024-11-05 19:22:39.323570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:10.646 [2024-11-05 19:22:39.378929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:10.646 [2024-11-05 19:22:39.379179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
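[Condensed for reference: the interrupt-mode target bring-up traced above reduces to the following shell sketch. The netns name, masks and socket path are the ones captured in this run; the polling loop is a paraphrase of waitforlisten, which retries rpc.py up to max_retries times, so treat it as an assumption about the helper's behaviour rather than its literal code.]

  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # poll the RPC socket until the target answers before issuing any other rpc.py calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192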
00:33:10.907 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:10.907 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0
00:33:10.907 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:33:10.907 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:10.907 19:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:33:10.907 [2024-11-05 19:22:40.192069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:33:10.907 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:11.168 ************************************
00:33:11.168 START TEST lvs_grow_clean
00:33:11.168 ************************************
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:33:11.168 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:33:11.430 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:11.430 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:11.430 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:33:11.691 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:33:11.691 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:33:11.691 19:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 01e681c2-5056-4da0-b269-f23cb95e4b7e lvol 150
00:33:11.691 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d9421d60-bcbc-424b-be83-a0cd560abc19
00:33:11.691 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:11.691 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:33:11.951 [2024-11-05 19:22:41.159944] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:33:11.951 [2024-11-05 19:22:41.160032] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:33:11.951 true
00:33:11.951 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:11.951 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:33:12.211 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
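[The lvstore setup just traced is, in essence, the sketch below. The commands and values are taken verbatim from this run; 01e681c2-... is simply the UUID this run happened to get, and 150 is the lvol size in MiB. Note that a 200M file with 4M clusters yields 49 data clusters once metadata is accounted for, and that bdev_aio_rescan grows only the bdev, not the lvstore.]

  truncate -s 200M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_rescan aio_bdev   # bdev grows 51200 -> 102400 blocks; the lvstore still reports 49 clusters until grown explicitly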
00:33:12.211 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:33:12.471 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9421d60-bcbc-424b-be83-a0cd560abc19
00:33:12.471 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:12.732 [2024-11-05 19:22:41.856691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:12.732 19:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=588858
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 588858 /var/tmp/bdevperf.sock
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 588858 ']'
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:12.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:12.732 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:33:12.992 [2024-11-05 19:22:42.067771] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
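[Exporting the volume and pointing an initiator at it, condensed from the trace above. All addresses, NQNs and flags are exactly as captured in this run; bdevperf runs as a separate process with its own RPC socket, and the attach call appears in the trace just below.]

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0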
00:33:12.992 [2024-11-05 19:22:42.067815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588858 ]
00:33:12.992 [2024-11-05 19:22:42.147307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:12.992 [2024-11-05 19:22:42.187172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:12.992 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:12.992 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:33:12.992 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:33:13.253 Nvme0n1
00:33:13.253 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:33:13.514 [
00:33:13.514 {
00:33:13.514 "name": "Nvme0n1",
00:33:13.514 "aliases": [
00:33:13.514 "d9421d60-bcbc-424b-be83-a0cd560abc19"
00:33:13.514 ],
00:33:13.514 "product_name": "NVMe disk",
00:33:13.514 "block_size": 4096,
00:33:13.514 "num_blocks": 38912,
00:33:13.514 "uuid": "d9421d60-bcbc-424b-be83-a0cd560abc19",
00:33:13.514 "numa_id": 0,
00:33:13.514 "assigned_rate_limits": {
00:33:13.514 "rw_ios_per_sec": 0,
00:33:13.514 "rw_mbytes_per_sec": 0,
00:33:13.514 "r_mbytes_per_sec": 0,
00:33:13.514 "w_mbytes_per_sec": 0
00:33:13.514 },
00:33:13.514 "claimed": false,
00:33:13.514 "zoned": false,
00:33:13.514 "supported_io_types": {
00:33:13.514 "read": true,
00:33:13.514 "write": true,
00:33:13.514 "unmap": true,
00:33:13.514 "flush": true,
00:33:13.514 "reset": true,
00:33:13.514 "nvme_admin": true,
00:33:13.514 "nvme_io": true,
00:33:13.514 "nvme_io_md": false,
00:33:13.514 "write_zeroes": true,
00:33:13.514 "zcopy": false,
00:33:13.514 "get_zone_info": false,
00:33:13.514 "zone_management": false,
00:33:13.514 "zone_append": false,
00:33:13.514 "compare": true,
00:33:13.514 "compare_and_write": true,
00:33:13.514 "abort": true,
00:33:13.514 "seek_hole": false,
00:33:13.514 "seek_data": false,
00:33:13.514 "copy": true,
00:33:13.514 "nvme_iov_md": false
00:33:13.514 },
00:33:13.514 "memory_domains": [
00:33:13.514 {
00:33:13.514 "dma_device_id": "system",
00:33:13.514 "dma_device_type": 1
00:33:13.514 }
00:33:13.514 ],
00:33:13.514 "driver_specific": {
00:33:13.514 "nvme": [
00:33:13.514 {
00:33:13.514 "trid": {
00:33:13.514 "trtype": "TCP",
00:33:13.514 "adrfam": "IPv4",
00:33:13.514 "traddr": "10.0.0.2",
00:33:13.514 "trsvcid": "4420",
00:33:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:33:13.514 },
00:33:13.514 "ctrlr_data": {
00:33:13.514 "cntlid": 1,
00:33:13.514 "vendor_id": "0x8086",
00:33:13.514 "model_number": "SPDK bdev Controller",
00:33:13.514 "serial_number": "SPDK0",
00:33:13.514 "firmware_revision": "25.01",
00:33:13.514 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:13.514 "oacs": {
00:33:13.514 "security": 0,
00:33:13.514 "format": 0,
00:33:13.514 "firmware": 0,
00:33:13.514 "ns_manage": 0
00:33:13.514 },
00:33:13.514 "multi_ctrlr": true,
00:33:13.514 "ana_reporting": false
00:33:13.514 },
00:33:13.514 "vs": {
00:33:13.514 "nvme_version": "1.3"
00:33:13.514 },
00:33:13.514 "ns_data": {
00:33:13.514 "id": 1,
00:33:13.514 "can_share": true
00:33:13.514 }
00:33:13.514 }
00:33:13.514 ],
00:33:13.514 "mp_policy": "active_passive"
00:33:13.514 }
00:33:13.514 }
00:33:13.514 ]
00:33:13.514 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=588902
00:33:13.514 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:33:13.514 19:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:13.775 Running I/O for 10 seconds...
00:33:14.715 Latency(us)
00:33:14.715 [2024-11-05T18:22:44.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:14.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:14.715 Nvme0n1 : 1.00 17663.00 69.00 0.00 0.00 0.00 0.00 0.00
00:33:14.715 [2024-11-05T18:22:44.038Z] ===================================================================================================================
00:33:14.715 [2024-11-05T18:22:44.038Z] Total : 17663.00 69.00 0.00 0.00 0.00 0.00 0.00
00:33:14.715
00:33:15.657 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:15.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:15.657 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00
00:33:15.657 [2024-11-05T18:22:44.980Z] ===================================================================================================================
00:33:15.657 [2024-11-05T18:22:44.980Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00
00:33:15.657
00:33:15.657 true
00:33:15.657 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:15.657 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:15.918 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:15.918 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:15.918 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 588902
00:33:16.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:16.859 Nvme0n1 : 3.00 17825.67 69.63 0.00 0.00 0.00 0.00 0.00
00:33:16.859 [2024-11-05T18:22:46.182Z] ===================================================================================================================
00:33:16.859 [2024-11-05T18:22:46.182Z] Total : 17825.67 69.63 0.00 0.00 0.00 0.00 0.00
00:33:16.859
00:33:17.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:17.799 Nvme0n1 : 4.00 17877.75 69.83 0.00 0.00 0.00 0.00 0.00
00:33:17.799 [2024-11-05T18:22:47.122Z] ===================================================================================================================
00:33:17.799 [2024-11-05T18:22:47.122Z] Total : 17877.75 69.83 0.00 0.00 0.00 0.00 0.00
00:33:17.799
00:33:18.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:18.741 Nvme0n1 : 5.00 17909.00 69.96 0.00 0.00 0.00 0.00 0.00
00:33:18.741 [2024-11-05T18:22:48.064Z] ===================================================================================================================
00:33:18.741 [2024-11-05T18:22:48.064Z] Total : 17909.00 69.96 0.00 0.00 0.00 0.00 0.00
00:33:18.741
00:33:19.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:19.683 Nvme0n1 : 6.00 17929.83 70.04 0.00 0.00 0.00 0.00 0.00
00:33:19.683 [2024-11-05T18:22:49.006Z] ===================================================================================================================
00:33:19.683 [2024-11-05T18:22:49.006Z] Total : 17929.83 70.04 0.00 0.00 0.00 0.00 0.00
00:33:19.683
00:33:20.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:20.623 Nvme0n1 : 7.00 17944.71 70.10 0.00 0.00 0.00 0.00 0.00
00:33:20.623 [2024-11-05T18:22:49.946Z] ===================================================================================================================
00:33:20.623 [2024-11-05T18:22:49.946Z] Total : 17944.71 70.10 0.00 0.00 0.00 0.00 0.00
00:33:20.623
00:33:21.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:21.563 Nvme0n1 : 8.00 17940.00 70.08 0.00 0.00 0.00 0.00 0.00
00:33:21.563 [2024-11-05T18:22:50.886Z] ===================================================================================================================
00:33:21.563 [2024-11-05T18:22:50.886Z] Total : 17940.00 70.08 0.00 0.00 0.00 0.00 0.00
00:33:21.563
00:33:22.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:22.946 Nvme0n1 : 9.00 17950.44 70.12 0.00 0.00 0.00 0.00 0.00
00:33:22.946 [2024-11-05T18:22:52.269Z] ===================================================================================================================
00:33:22.946 [2024-11-05T18:22:52.269Z] Total : 17950.44 70.12 0.00 0.00 0.00 0.00 0.00
00:33:22.946
00:33:23.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:23.886 Nvme0n1 : 10.00 17965.20 70.18 0.00 0.00 0.00 0.00 0.00
00:33:23.886 [2024-11-05T18:22:53.209Z] ===================================================================================================================
00:33:23.886 [2024-11-05T18:22:53.209Z] Total : 17965.20 70.18 0.00 0.00 0.00 0.00 0.00
00:33:23.886
00:33:23.886
00:33:23.886 Latency(us)
00:33:23.886 [2024-11-05T18:22:53.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:23.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:23.886 Nvme0n1 : 10.01 17970.58 70.20 0.00 0.00 7118.92 2594.13 13981.01
00:33:23.886 [2024-11-05T18:22:53.209Z] ===================================================================================================================
00:33:23.886 [2024-11-05T18:22:53.209Z] Total : 17970.58 70.20 0.00 0.00 7118.92 2594.13 13981.01
00:33:23.886 {
00:33:23.886 "results": [
00:33:23.886 {
00:33:23.886 "job": "Nvme0n1",
00:33:23.886 "core_mask": "0x2",
00:33:23.886 "workload": "randwrite",
00:33:23.886 "status": "finished",
00:33:23.886 "queue_depth": 128,
00:33:23.886 "io_size": 4096,
00:33:23.886 "runtime": 10.007635,
00:33:23.886 "iops": 17970.57946258032,
00:33:23.886 "mibps": 70.19757602570438,
00:33:23.886 "io_failed": 0,
00:33:23.886 "io_timeout": 0,
00:33:23.886 "avg_latency_us": 7118.915324143837,
00:33:23.886 "min_latency_us": 2594.133333333333,
00:33:23.886 "max_latency_us": 13981.013333333334
00:33:23.886 }
00:33:23.886 ],
00:33:23.886 "core_count": 1
00:33:23.886 }
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 588858
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 588858 ']'
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 588858
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 588858
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 588858'
00:33:23.886 killing process with pid 588858
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 588858
00:33:23.886 Received shutdown signal, test time was about 10.000000 seconds
00:33:23.886
00:33:23.886 Latency(us)
00:33:23.886 [2024-11-05T18:22:53.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:23.886 [2024-11-05T18:22:53.209Z] ===================================================================================================================
00:33:23.886 [2024-11-05T18:22:53.209Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:23.886 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 588858
00:33:23.886 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:24.146 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:24.146 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:24.146 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:33:24.407 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:33:24.407 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:33:24.407 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:24.667 [2024-11-05 19:22:53.764124] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:33:24.667 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:24.667 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:33:24.667 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:24.667 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:24.667 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:24.668 request:
00:33:24.668 {
00:33:24.668 "uuid": "01e681c2-5056-4da0-b269-f23cb95e4b7e",
00:33:24.668 "method": "bdev_lvol_get_lvstores",
00:33:24.668 "req_id": 1
00:33:24.668 }
00:33:24.668 Got JSON-RPC error response
00:33:24.668 response:
00:33:24.668 {
00:33:24.668 "code": -19,
00:33:24.668 "message": "No such device"
00:33:24.668 }
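[The failure just logged is deliberate: the test deletes the base AIO bdev out from under the lvstore and then requires the lvstore lookup to fail. A sketch of the same check; NOT is the autotest helper that succeeds only when its command exits non-zero, paraphrased here rather than quoted:]

  ./scripts/rpc.py bdev_aio_delete aio_bdev
  NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"   # expected: JSON-RPC error -19 (No such device)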
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:24.668 19:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:24.928 aio_bdev
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d9421d60-bcbc-424b-be83-a0cd560abc19
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=d9421d60-bcbc-424b-be83-a0cd560abc19
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:33:24.929 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:25.189 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d9421d60-bcbc-424b-be83-a0cd560abc19 -t 2000
00:33:25.189 [
00:33:25.189 {
00:33:25.189 "name": "d9421d60-bcbc-424b-be83-a0cd560abc19",
00:33:25.189 "aliases": [
00:33:25.189 "lvs/lvol"
00:33:25.189 ],
00:33:25.189 "product_name": "Logical Volume",
00:33:25.189 "block_size": 4096,
00:33:25.189 "num_blocks": 38912,
00:33:25.189 "uuid": "d9421d60-bcbc-424b-be83-a0cd560abc19",
00:33:25.189 "assigned_rate_limits": {
00:33:25.189 "rw_ios_per_sec": 0,
00:33:25.189 "rw_mbytes_per_sec": 0,
00:33:25.189 "r_mbytes_per_sec": 0,
00:33:25.189 "w_mbytes_per_sec": 0
00:33:25.189 },
00:33:25.189 "claimed": false,
00:33:25.189 "zoned": false,
00:33:25.189 "supported_io_types": {
00:33:25.189 "read": true,
00:33:25.189 "write": true,
00:33:25.189 "unmap": true,
00:33:25.189 "flush": false,
00:33:25.189 "reset": true,
00:33:25.189 "nvme_admin": false,
00:33:25.189 "nvme_io": false,
00:33:25.189 "nvme_io_md": false,
00:33:25.189 "write_zeroes": true,
00:33:25.189 "zcopy": false,
00:33:25.189 "get_zone_info": false,
00:33:25.189 "zone_management": false,
00:33:25.189 "zone_append": false,
00:33:25.189 "compare": false,
00:33:25.189 "compare_and_write": false,
00:33:25.189 "abort": false,
00:33:25.189 "seek_hole": true,
00:33:25.189 "seek_data": true,
00:33:25.189 "copy": false,
00:33:25.189 "nvme_iov_md": false
00:33:25.189 },
00:33:25.189 "driver_specific": {
00:33:25.189 "lvol": {
00:33:25.189 "lvol_store_uuid": "01e681c2-5056-4da0-b269-f23cb95e4b7e",
00:33:25.189 "base_bdev": "aio_bdev",
00:33:25.189 "thin_provision": false,
00:33:25.189 "num_allocated_clusters": 38,
00:33:25.189 "snapshot": false,
00:33:25.189 "clone": false,
00:33:25.189 "esnap_clone": false
00:33:25.189 }
00:33:25.189 }
00:33:25.189 }
00:33:25.189 ]
00:33:25.189 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0
00:33:25.189 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:25.189 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:33:25.449 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:33:25.449 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:25.449 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:33:25.710 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
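[The two checks above pin down the grow arithmetic: after the lvstore is reloaded from the recreated AIO bdev it reports 99 total data clusters (400M file at 4M per cluster, minus metadata) and 61 free, i.e. 99 minus the 38 clusters the 150M lvol occupies (its num_allocated_clusters in the dump above). The same verification, sketched:]

  free_clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  data_clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free_clusters == 61 && data_clusters == 99 ))   # 99 total - 38 allocated to the 150M lvol = 61 free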
00:33:25.710 19:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d9421d60-bcbc-424b-be83-a0cd560abc19
00:33:25.710 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 01e681c2-5056-4da0-b269-f23cb95e4b7e
00:33:25.970 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:26.231
00:33:26.231 real 0m15.184s
00:33:26.231 user 0m14.745s
00:33:26.231 sys 0m1.355s
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:33:26.231 ************************************
00:33:26.231 END TEST lvs_grow_clean
00:33:26.231 ************************************
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:33:26.231 ************************************
00:33:26.231 START TEST lvs_grow_dirty
00:33:26.231 ************************************
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:33:26.231 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:33:26.232 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:33:26.232 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:26.232 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:26.232 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:26.492 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:33:26.492 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:33:26.810 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:26.810 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:26.810 19:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:33:26.810 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:33:26.810 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:33:26.810 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3bdf8886-33de-4cb5-8754-22f94dbb858f lvol 150
00:33:27.116 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:27.116 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:33:27.116 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:33:27.116 [2024-11-05 19:22:56.399954] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:33:27.116 [2024-11-05 19:22:56.400029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:33:27.116 true
00:33:27.116 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:33:27.116 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:27.402 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:33:27.402 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:33:27.662 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:27.662 19:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:27.923 [2024-11-05 19:22:57.084343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:27.923 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=591777
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 591777 /var/tmp/bdevperf.sock
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 591777 ']'
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:28.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:28.183 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:28.183 [2024-11-05 19:22:57.306450] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:33:28.183 [2024-11-05 19:22:57.306519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591777 ]
00:33:28.184 [2024-11-05 19:22:57.394146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:28.184 [2024-11-05 19:22:57.428425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:28.444 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:28.444 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:33:28.444 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:33:28.704 Nvme0n1
00:33:28.704 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:33:28.704 [
00:33:28.704 {
00:33:28.704 "name": "Nvme0n1",
00:33:28.704 "aliases": [
00:33:28.704 "32ed2605-4f59-41c6-9912-d7b61dd6be55"
00:33:28.704 ],
00:33:28.704 "product_name": "NVMe disk",
00:33:28.704 "block_size": 4096,
00:33:28.704 "num_blocks": 38912,
00:33:28.704 "uuid": "32ed2605-4f59-41c6-9912-d7b61dd6be55",
00:33:28.704 "numa_id": 0,
00:33:28.704 "assigned_rate_limits": {
00:33:28.704 "rw_ios_per_sec": 0,
00:33:28.704 "rw_mbytes_per_sec": 0,
00:33:28.704 "r_mbytes_per_sec": 0,
00:33:28.704 "w_mbytes_per_sec": 0
00:33:28.704 },
00:33:28.704 "claimed": false,
00:33:28.704 "zoned": false,
00:33:28.704 "supported_io_types": {
00:33:28.704 "read": true,
00:33:28.704 "write": true,
00:33:28.704 "unmap": true,
00:33:28.704 "flush": true,
00:33:28.704 "reset": true,
00:33:28.704 "nvme_admin": true,
00:33:28.704 "nvme_io": true,
00:33:28.704 "nvme_io_md": false,
00:33:28.704 "write_zeroes": true,
00:33:28.704 "zcopy": false,
00:33:28.704 "get_zone_info": false,
00:33:28.704 "zone_management": false,
00:33:28.704 "zone_append": false,
00:33:28.704 "compare": true,
00:33:28.704 "compare_and_write": true,
00:33:28.704 "abort": true,
00:33:28.704 "seek_hole": false,
00:33:28.704 "seek_data": false,
00:33:28.704 "copy": true,
00:33:28.704 "nvme_iov_md": false
00:33:28.704 },
00:33:28.704 "memory_domains": [
00:33:28.704 {
00:33:28.704 "dma_device_id": "system",
00:33:28.704 "dma_device_type": 1
00:33:28.704 }
00:33:28.704 ],
00:33:28.704 "driver_specific": {
00:33:28.704 "nvme": [
00:33:28.704 {
00:33:28.704 "trid": {
00:33:28.704 "trtype": "TCP",
00:33:28.704 "adrfam": "IPv4",
00:33:28.704 "traddr": "10.0.0.2",
00:33:28.704 "trsvcid": "4420",
00:33:28.704 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:33:28.704 },
00:33:28.704 "ctrlr_data": {
00:33:28.704 "cntlid": 1,
00:33:28.704 "vendor_id": "0x8086",
00:33:28.704 "model_number": "SPDK bdev Controller",
00:33:28.704 "serial_number": "SPDK0",
00:33:28.704 "firmware_revision": "25.01",
00:33:28.704 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:28.704 "oacs": {
00:33:28.704 "security": 0,
00:33:28.704 "format": 0,
00:33:28.704 "firmware": 0,
00:33:28.704 "ns_manage": 0
00:33:28.704 },
00:33:28.704 "multi_ctrlr": true,
00:33:28.704 "ana_reporting": false
00:33:28.704 },
00:33:28.704 "vs": {
00:33:28.704 "nvme_version": "1.3"
00:33:28.704 },
00:33:28.704 "ns_data": {
00:33:28.704 "id": 1,
00:33:28.704 "can_share": true
00:33:28.704 }
00:33:28.704 }
00:33:28.704 ],
00:33:28.704 "mp_policy": "active_passive"
00:33:28.704 }
00:33:28.704 }
00:33:28.704 ]
00:33:28.704 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=591944
00:33:28.704 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:33:28.704 19:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:28.964 Running I/O for 10 seconds...
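[What follows is the crux of the test: while bdevperf's 10-second randwrite job is in flight, the lvstore is grown to match the already-rescanned AIO file, and the job must still finish cleanly. Condensed as a sketch; run_test_pid is the backgrounded perform_tests invocation above:]

  ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # 49 -> 99 data clusters, concurrent with I/O
  (( $(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))
  wait "$run_test_pid"                                # the I/O run must complete without errors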
00:33:29.904 Latency(us)
00:33:29.904 [2024-11-05T18:22:59.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:29.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:29.904 Nvme0n1 : 1.00 17720.00 69.22 0.00 0.00 0.00 0.00 0.00
00:33:29.904 [2024-11-05T18:22:59.227Z] ===================================================================================================================
00:33:29.904 [2024-11-05T18:22:59.227Z] Total : 17720.00 69.22 0.00 0.00 0.00 0.00 0.00
00:33:29.904
00:33:30.854 19:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:30.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:30.854 Nvme0n1 : 2.00 17813.50 69.58 0.00 0.00 0.00 0.00 0.00
00:33:30.854 [2024-11-05T18:23:00.177Z] ===================================================================================================================
00:33:30.854 [2024-11-05T18:23:00.177Z] Total : 17813.50 69.58 0.00 0.00 0.00 0.00 0.00
00:33:30.854
00:33:30.854 true
00:33:30.854 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:30.854 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:31.114 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:31.114 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:31.114 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 591944
00:33:32.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:32.053 Nvme0n1 : 3.00 17865.67 69.79 0.00 0.00 0.00 0.00 0.00
00:33:32.053 [2024-11-05T18:23:01.376Z] ===================================================================================================================
00:33:32.053 [2024-11-05T18:23:01.376Z] Total : 17865.67 69.79 0.00 0.00 0.00 0.00 0.00
00:33:32.053
00:33:32.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:32.993 Nvme0n1 : 4.00 17907.75 69.95 0.00 0.00 0.00 0.00 0.00
00:33:32.993 [2024-11-05T18:23:02.316Z] ===================================================================================================================
00:33:32.993 [2024-11-05T18:23:02.316Z] Total : 17907.75 69.95 0.00 0.00 0.00 0.00 0.00
00:33:32.993
00:33:33.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:33.934 Nvme0n1 : 5.00 17933.00 70.05 0.00 0.00 0.00 0.00 0.00
00:33:33.934 [2024-11-05T18:23:03.257Z] ===================================================================================================================
00:33:33.934 [2024-11-05T18:23:03.257Z] Total : 17933.00 70.05 0.00 0.00 0.00 0.00 0.00
00:33:33.934
00:33:34.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:34.874 Nvme0n1 : 6.00 17949.83 70.12 0.00 0.00 0.00 0.00 0.00
00:33:34.874 [2024-11-05T18:23:04.197Z] ===================================================================================================================
00:33:34.874 [2024-11-05T18:23:04.197Z] Total : 17949.83 70.12 0.00 0.00 0.00 0.00 0.00
00:33:34.874
00:33:35.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:35.815 Nvme0n1 : 7.00 17971.00 70.20 0.00 0.00 0.00 0.00 0.00
00:33:35.815 [2024-11-05T18:23:05.138Z] ===================================================================================================================
00:33:35.815 [2024-11-05T18:23:05.138Z] Total : 17971.00 70.20 0.00 0.00 0.00 0.00 0.00
00:33:35.815
00:33:36.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:36.755 Nvme0n1 : 8.00 17986.75 70.26 0.00 0.00 0.00 0.00 0.00
00:33:36.755 [2024-11-05T18:23:06.078Z] ===================================================================================================================
00:33:36.755 [2024-11-05T18:23:06.078Z] Total : 17986.75 70.26 0.00 0.00 0.00 0.00 0.00
00:33:36.755
00:33:38.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:38.138 Nvme0n1 : 9.00 17999.11 70.31 0.00 0.00 0.00 0.00 0.00
00:33:38.138 [2024-11-05T18:23:07.461Z] ===================================================================================================================
00:33:38.138 [2024-11-05T18:23:07.461Z] Total : 17999.11 70.31 0.00 0.00 0.00 0.00 0.00
00:33:38.138
00:33:39.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:39.079 Nvme0n1 : 10.00 18008.90 70.35 0.00 0.00 0.00 0.00 0.00
00:33:39.079 [2024-11-05T18:23:08.402Z] ===================================================================================================================
00:33:39.079 [2024-11-05T18:23:08.402Z] Total : 18008.90 70.35 0.00 0.00 0.00 0.00 0.00
00:33:39.079
00:33:39.079
00:33:39.079 Latency(us)
00:33:39.079 [2024-11-05T18:23:08.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:39.079 Nvme0n1 : 10.01 18011.84 70.36 0.00 0.00 7103.28 1740.80 13325.65
00:33:39.079 [2024-11-05T18:23:08.402Z] ===================================================================================================================
00:33:39.079 [2024-11-05T18:23:08.402Z] Total : 18011.84 70.36 0.00 0.00 7103.28 1740.80 13325.65
00:33:39.079 {
00:33:39.079 "results": [
00:33:39.079 {
00:33:39.079 "job": "Nvme0n1",
00:33:39.079 "core_mask": "0x2",
00:33:39.079 "workload": "randwrite",
00:33:39.079 "status": "finished",
00:33:39.079 "queue_depth": 128,
00:33:39.079 "io_size": 4096,
00:33:39.079 "runtime": 10.005474,
00:33:39.079 "iops": 18011.840318609593,
00:33:39.079 "mibps": 70.35875124456872,
00:33:39.079 "io_failed": 0,
00:33:39.079 "io_timeout": 0,
00:33:39.079 "avg_latency_us": 7103.280118819719,
00:33:39.079 "min_latency_us": 1740.8,
00:33:39.079 "max_latency_us": 13325.653333333334
00:33:39.079 }
00:33:39.079 ],
00:33:39.079 "core_count": 1
00:33:39.079 }
00:33:39.079 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 591777
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 591777 ']'
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 591777
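The per-second rows and the closing JSON block above are bdevperf output: the harness started a job against the exported lvol (core mask 0x2, randwrite, queue depth 128, 4096-byte I/O) and triggered the run with bdevperf.py perform_tests, as logged at nvmf_lvs_grow.sh@55. A minimal standalone sketch of the same pattern follows; the socket path matches this run, while the binary locations and the omitted configuration step are assumptions about a typical SPDK build tree:

  # Start bdevperf idle (-z waits for RPC configuration), mirroring the job
  # parameters shown in the table header above.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -m 0x2 -q 128 -o 4096 -w randwrite -t 10 &

  # (configure an NVMe-oF attached bdev over the same socket here)

  # Kick off the run; this prints the per-second table and the JSON summary.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests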
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 591777
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 591777'
00:33:39.080 killing process with pid 591777
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 591777
00:33:39.080 Received shutdown signal, test time was about 10.000000 seconds
00:33:39.080
00:33:39.080 Latency(us)
00:33:39.080 [2024-11-05T18:23:08.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.080 [2024-11-05T18:23:08.403Z] ===================================================================================================================
00:33:39.080 [2024-11-05T18:23:08.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 591777
00:33:39.080 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:39.340 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:39.340 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:39.340 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 588159
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 588159
00:33:39.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 588159 Killed "${NVMF_APP[@]}" "$@"
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
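This is the heart of the lvs_grow_dirty case: nvmf_lvs_grow.sh@74 SIGKILLs the target (pid 588159) right after the grow, so the lvstore superblock is never rewritten cleanly and the restart below has to run blobstore recovery. A rough sketch of the pattern, with $lvs_uuid and $nvmfpid standing in for the values used in this run:

  # Crash the target while the lvstore metadata is dirty; wait reaps the
  # Killed job so the script can continue.
  kill -9 "$nvmfpid"
  wait "$nvmfpid" || true
  # Relaunching the app and re-creating aio_bdev forces blobstore recovery
  # (see the bs_recover NOTICE lines further down).
  "${NVMF_APP[@]}" -m 0x1 &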
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=593958
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 593958
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 593958 ']'
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:39.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:39.600 19:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:33:39.600 [2024-11-05 19:23:08.903682] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:39.600 [2024-11-05 19:23:08.904719] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:33:39.600 [2024-11-05 19:23:08.904774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:39.861 [2024-11-05 19:23:08.984384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.861 [2024-11-05 19:23:09.022942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:39.861 [2024-11-05 19:23:09.022979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:39.861 [2024-11-05 19:23:09.022987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:39.861 [2024-11-05 19:23:09.022993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:39.861 [2024-11-05 19:23:09.023000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
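nvmfappstart relaunches nvmf_tgt inside the nvmf_ns_spdk namespace with --interrupt-mode on a single core (the exact command line is logged just above), and waitforlisten then polls the RPC socket until the app answers. A rough equivalent of that startup-and-poll loop, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout as the working directory:

  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Poll until the RPC server responds, like the harness's waitforlisten.
  until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done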
00:33:39.861 [2024-11-05 19:23:09.023587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:39.861 [2024-11-05 19:23:09.078390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:39.861 [2024-11-05 19:23:09.078653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:40.449 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:40.710 [2024-11-05 19:23:09.910382] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:33:40.710 [2024-11-05 19:23:09.910507] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:33:40.710 [2024-11-05 19:23:09.910539] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:33:40.710 19:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:40.971 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32ed2605-4f59-41c6-9912-d7b61dd6be55 -t 2000
00:33:41.230 [
00:33:41.230 {
00:33:41.230 "name": "32ed2605-4f59-41c6-9912-d7b61dd6be55",
00:33:41.230 "aliases": [
00:33:41.230 "lvs/lvol"
00:33:41.230 ],
00:33:41.230 "product_name": "Logical Volume",
00:33:41.230 "block_size": 4096,
00:33:41.230 "num_blocks": 38912,
00:33:41.230 "uuid": "32ed2605-4f59-41c6-9912-d7b61dd6be55",
00:33:41.230 "assigned_rate_limits": {
00:33:41.230 "rw_ios_per_sec": 0,
00:33:41.230 "rw_mbytes_per_sec": 0,
00:33:41.230 "r_mbytes_per_sec": 0,
00:33:41.230 "w_mbytes_per_sec": 0
00:33:41.230 },
00:33:41.230 "claimed": false,
00:33:41.230 "zoned": false,
00:33:41.230 "supported_io_types": {
00:33:41.230 "read": true,
00:33:41.230 "write": true,
00:33:41.230 "unmap": true,
00:33:41.230 "flush": false,
00:33:41.230 "reset": true,
00:33:41.230 "nvme_admin": false,
00:33:41.230 "nvme_io": false,
00:33:41.230 "nvme_io_md": false,
00:33:41.230 "write_zeroes": true,
00:33:41.230 "zcopy": false,
00:33:41.230 "get_zone_info": false,
00:33:41.230 "zone_management": false,
00:33:41.230 "zone_append": false,
00:33:41.230 "compare": false,
00:33:41.230 "compare_and_write": false,
00:33:41.230 "abort": false,
00:33:41.230 "seek_hole": true,
00:33:41.230 "seek_data": true,
00:33:41.230 "copy": false,
00:33:41.230 "nvme_iov_md": false
00:33:41.230 },
00:33:41.230 "driver_specific": {
00:33:41.230 "lvol": {
00:33:41.230 "lvol_store_uuid": "3bdf8886-33de-4cb5-8754-22f94dbb858f",
00:33:41.230 "base_bdev": "aio_bdev",
00:33:41.230 "thin_provision": false,
00:33:41.230 "num_allocated_clusters": 38,
00:33:41.230 "snapshot": false,
00:33:41.230 "clone": false,
00:33:41.230 "esnap_clone": false
00:33:41.230 }
00:33:41.230 }
00:33:41.230 }
00:33:41.230 ]
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:41.230 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:33:41.490 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:33:41.490 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:33:41.490 [2024-11-05 19:23:10.804016] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:33:41.751 19:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:41.751 request:
00:33:41.751 {
00:33:41.751 "uuid": "3bdf8886-33de-4cb5-8754-22f94dbb858f",
00:33:41.751 "method": "bdev_lvol_get_lvstores",
00:33:41.751 "req_id": 1
00:33:41.751 }
00:33:41.751 Got JSON-RPC error response
00:33:41.751 response:
00:33:41.751 {
00:33:41.751 "code": -19,
00:33:41.751 "message": "No such device"
00:33:41.751 }
00:33:41.751 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:33:41.751 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:41.751 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:41.751 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:41.751 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:33:42.012 aio_bdev
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=32ed2605-4f59-41c6-9912-d7b61dd6be55
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:33:42.012 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:33:42.272 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32ed2605-4f59-41c6-9912-d7b61dd6be55 -t 2000
00:33:42.272 [
00:33:42.272 {
00:33:42.272 "name": "32ed2605-4f59-41c6-9912-d7b61dd6be55",
00:33:42.272 "aliases": [
00:33:42.272 "lvs/lvol"
00:33:42.272 ],
00:33:42.272 "product_name": "Logical Volume",
00:33:42.272 "block_size": 4096,
00:33:42.272 "num_blocks": 38912,
00:33:42.272 "uuid": "32ed2605-4f59-41c6-9912-d7b61dd6be55",
00:33:42.272 "assigned_rate_limits": {
00:33:42.272 "rw_ios_per_sec": 0,
00:33:42.272 "rw_mbytes_per_sec": 0,
00:33:42.272 "r_mbytes_per_sec": 0,
00:33:42.272 "w_mbytes_per_sec": 0
00:33:42.272 },
00:33:42.272 "claimed": false,
00:33:42.272 "zoned": false,
00:33:42.272 "supported_io_types": {
00:33:42.272 "read": true,
00:33:42.272 "write": true,
00:33:42.272 "unmap": true,
00:33:42.272 "flush": false,
00:33:42.272 "reset": true,
00:33:42.272 "nvme_admin": false,
00:33:42.272 "nvme_io": false,
00:33:42.272 "nvme_io_md": false,
00:33:42.272 "write_zeroes": true,
00:33:42.272 "zcopy": false,
00:33:42.272 "get_zone_info": false,
00:33:42.272 "zone_management": false,
00:33:42.272 "zone_append": false,
00:33:42.272 "compare": false,
00:33:42.272 "compare_and_write": false,
00:33:42.272 "abort": false,
00:33:42.272 "seek_hole": true,
00:33:42.272 "seek_data": true,
00:33:42.272 "copy": false,
00:33:42.272 "nvme_iov_md": false
00:33:42.272 },
00:33:42.272 "driver_specific": {
00:33:42.272 "lvol": {
00:33:42.272 "lvol_store_uuid": "3bdf8886-33de-4cb5-8754-22f94dbb858f",
00:33:42.272 "base_bdev": "aio_bdev",
00:33:42.272 "thin_provision": false,
00:33:42.272 "num_allocated_clusters": 38,
00:33:42.272 "snapshot": false,
00:33:42.272 "clone": false,
00:33:42.272 "esnap_clone": false
00:33:42.272 }
00:33:42.272 }
00:33:42.272 }
00:33:42.272 ]
00:33:42.272 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:33:42.272 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f
00:33:42.272 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:33:42.532 19:23:11
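The checks that follow are the actual recovery assertion: after the dirty restart, the lvstore must still report the post-grow geometry of 99 total data clusters, with 61 free once the 38-cluster lvol above is accounted for (38 + 61 = 99). Condensed into plain shell, using the store UUID from this run and an SPDK checkout as the working directory:

  lvs=$(./scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f)
  free_clusters=$(jq -r '.[0].free_clusters' <<< "$lvs")
  data_clusters=$(jq -r '.[0].total_data_clusters' <<< "$lvs")
  # Both must hold, or the recovery lost the grown geometry.
  (( free_clusters == 61 && data_clusters == 99 ))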
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:42.532 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3bdf8886-33de-4cb5-8754-22f94dbb858f 00:33:42.532 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:42.792 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:42.792 19:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32ed2605-4f59-41c6-9912-d7b61dd6be55 00:33:42.792 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3bdf8886-33de-4cb5-8754-22f94dbb858f 00:33:43.052 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:43.313 00:33:43.313 real 0m16.956s 00:33:43.313 user 0m34.746s 00:33:43.313 sys 0m2.929s 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:43.313 ************************************ 00:33:43.313 END TEST lvs_grow_dirty 00:33:43.313 ************************************ 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:43.313 nvmf_trace.0 00:33:43.313 19:23:12 
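Because the target ran with -e 0xFFFF, every tracepoint group was recording into the shared-memory file /dev/shm/nvmf_trace.0, and process_shm archives it next to the build output exactly as the tar line above shows. The archive can later be decoded offline; $output_dir below is a stand-in for the harness's output directory:

  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  # Decode with the spdk_trace tool, as hinted by the app at startup:
  #   spdk_trace -s nvmf -i 0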
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:43.313 rmmod nvme_tcp 00:33:43.313 rmmod nvme_fabrics 00:33:43.313 rmmod nvme_keyring 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 593958 ']' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 593958 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 593958 ']' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 593958 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:43.313 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 593958 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 593958' 00:33:43.574 killing process with pid 593958 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 593958 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 593958 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:43.574 19:23:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:43.574 19:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:46.120 19:23:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:33:46.120 00:33:46.120 real 0m43.350s 00:33:46.120 user 0m52.465s 00:33:46.120 sys 0m10.235s 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:46.120 ************************************ 00:33:46.120 END TEST nvmf_lvs_grow 00:33:46.120 ************************************ 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:46.120 ************************************ 00:33:46.120 START TEST nvmf_bdev_io_wait 00:33:46.120 ************************************ 00:33:46.120 19:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:46.120 * Looking for test storage... 00:33:46.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.120 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
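The nvmf_fini/iptr sequence that closed out nvmf_lvs_grow above is the teardown mirror of nvmftestinit: it flushes the test addresses off both E810 ports and filters the SPDK_NVMF chains back out of the firewall. In plain shell, with the device names from this run:

  for dev in cvl_0_0 cvl_0_1; do
      ip addr flush dev "$dev"   # drop the 10.0.0.x test addresses
  done
  # Keep every rule except the ones the test suite tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore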
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.121 --rc genhtml_branch_coverage=1 00:33:46.121 --rc genhtml_function_coverage=1 00:33:46.121 --rc genhtml_legend=1 00:33:46.121 --rc geninfo_all_blocks=1 00:33:46.121 --rc geninfo_unexecuted_blocks=1 00:33:46.121 00:33:46.121 ' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.121 --rc genhtml_branch_coverage=1 00:33:46.121 --rc genhtml_function_coverage=1 00:33:46.121 --rc genhtml_legend=1 00:33:46.121 --rc geninfo_all_blocks=1 00:33:46.121 --rc geninfo_unexecuted_blocks=1 00:33:46.121 00:33:46.121 ' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.121 --rc genhtml_branch_coverage=1 00:33:46.121 --rc 
genhtml_function_coverage=1 00:33:46.121 --rc genhtml_legend=1 00:33:46.121 --rc geninfo_all_blocks=1 00:33:46.121 --rc geninfo_unexecuted_blocks=1 00:33:46.121 00:33:46.121 ' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:46.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.121 --rc genhtml_branch_coverage=1 00:33:46.121 --rc genhtml_function_coverage=1 00:33:46.121 --rc genhtml_legend=1 00:33:46.121 --rc geninfo_all_blocks=1 00:33:46.121 --rc geninfo_unexecuted_blocks=1 00:33:46.121 00:33:46.121 ' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.121 19:23:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:46.121 19:23:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:46.121 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:33:46.122 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.262 19:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.262 19:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:54.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:54.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:54.262 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:54.263 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:54.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:54.263 
19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:54.263 10.0.0.1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:33:54.263 19:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:54.263 10.0.0.2 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.263 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:54.264 
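At this point setup_interface_pair has built one initiator/target pair: the nvmf_ns_spdk namespace exists, the second E810 port has been moved into it, both ends are addressed out of the 0x0a000001 pool, and TCP port 4420 is opened on the initiator side. The same steps condensed, with val_to_ip reimplemented from the printf idiom in the trace (a sketch using the same device and namespace names):

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk            # target port leaves the root netns

val_to_ip() {   # 167772161 == 0x0a000001 -> 10.0.0.1
    printf '%u.%u.%u.%u\n' $(($1 >> 24 & 255)) $(($1 >> 16 & 255)) $(($1 >> 8 & 255)) $(($1 & 255))
}
ip addr add "$(val_to_ip 167772161)/24" dev cvl_0_0                             # 10.0.0.1
ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev cvl_0_1  # 10.0.0.2
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias                              # cached for later readback
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port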
19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:54.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:54.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.634 ms 00:33:54.264 00:33:54.264 --- 10.0.0.1 ping statistics --- 00:33:54.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.264 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:54.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:33:54.264 00:33:54.264 --- 10.0.0.2 ping statistics --- 00:33:54.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.264 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:54.264 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:54.265 19:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 
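nvmf_legacy_env above back-fills the classic variables (NVMF_TARGET_INTERFACE=cvl_0_1, NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2; the SECOND_* variables stay empty because only one pair was created) by reading each address back from the ifalias written during setup, after which the host-side nvme-tcp module is loaded. A minimal readback helper in the same spirit (illustrative, not the literal get_ip_address):

get_ip() {   # usage: get_ip <dev> [<netns>]
    local dev=$1 ns=$2
    # ip netns exec remounts /sys, so the alias read resolves inside the netns.
    ${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias"
}
get_ip cvl_0_0                # initiator, root namespace -> 10.0.0.1
get_ip cvl_0_1 nvmf_ns_spdk   # target, inside the namespace  -> 10.0.0.2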
00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=598992 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 598992 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 598992 ']' 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:54.265 19:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.265 [2024-11-05 19:23:22.827002] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:54.265 [2024-11-05 19:23:22.828154] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:33:54.265 [2024-11-05 19:23:22.828206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.265 [2024-11-05 19:23:22.910894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.265 [2024-11-05 19:23:22.953976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.265 [2024-11-05 19:23:22.954014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.265 [2024-11-05 19:23:22.954022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.265 [2024-11-05 19:23:22.954028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.265 [2024-11-05 19:23:22.954034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
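nvmfappstart launches the target inside the namespace (four cores via -m 0xF, all tracepoint groups via -e 0xFFFF, interrupt mode, and --wait-for-rpc so subsystem initialization pauses until told to proceed), then waitforlisten blocks until the RPC socket answers. The pattern, condensed; the polling loop is an illustrative stand-in for the real waitforlisten helper:

ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target accepts commands.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done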
00:33:54.265 [2024-11-05 19:23:22.955593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.265 [2024-11-05 19:23:22.955738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.265 [2024-11-05 19:23:22.955911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.265 [2024-11-05 19:23:22.955912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.265 [2024-11-05 19:23:22.956189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 [2024-11-05 19:23:23.709728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:54.526 [2024-11-05 19:23:23.710086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:54.526 [2024-11-05 19:23:23.710814] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:54.526 [2024-11-05 19:23:23.710953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
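The point of --wait-for-rpc shows up here: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool and per-thread cache to almost nothing, which only takes effect if it lands before subsystem initialization, and a starved pool is what lets this run exercise the bdev-io-wait path. framework_start_init then completes the deferred init, and the nvmf poll groups come up in interrupt mode. The same two calls as plain rpc.py invocations (the trace goes through the rpc_cmd wrapper):

./scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool/cache, must be set pre-init
./scripts/rpc.py framework_start_init         # now run the deferred subsystem init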
00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 [2024-11-05 19:23:23.720379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 Malloc0 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:54.526 [2024-11-05 19:23:23.784563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=599062 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=599064 00:33:54.526 19:23:23 
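With the framework up, the target is provisioned over the same RPC socket: a TCP transport with the C2H-success optimization disabled (-o) and 8 KiB in-capsule data (-u 8192), a 64 MiB malloc bdev with 512-byte blocks, and subsystem cnode1 exposing that bdev as a namespace on the 10.0.0.2:4420 listener. The equivalent direct calls:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420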
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:54.526 { 00:33:54.526 "params": { 00:33:54.526 "name": "Nvme$subsystem", 00:33:54.526 "trtype": "$TEST_TRANSPORT", 00:33:54.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.526 "adrfam": "ipv4", 00:33:54.526 "trsvcid": "$NVMF_PORT", 00:33:54.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.526 "hdgst": ${hdgst:-false}, 00:33:54.526 "ddgst": ${ddgst:-false} 00:33:54.526 }, 00:33:54.526 "method": "bdev_nvme_attach_controller" 00:33:54.526 } 00:33:54.526 EOF 00:33:54.526 )") 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=599066 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:54.526 { 00:33:54.526 "params": { 00:33:54.526 "name": "Nvme$subsystem", 00:33:54.526 "trtype": "$TEST_TRANSPORT", 00:33:54.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.526 "adrfam": "ipv4", 00:33:54.526 "trsvcid": "$NVMF_PORT", 00:33:54.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.526 "hdgst": ${hdgst:-false}, 00:33:54.526 "ddgst": ${ddgst:-false} 00:33:54.526 }, 00:33:54.526 "method": "bdev_nvme_attach_controller" 00:33:54.526 } 00:33:54.526 EOF 00:33:54.526 )") 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=599069 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:54.526 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:54.526 { 00:33:54.526 "params": { 00:33:54.526 "name": "Nvme$subsystem", 00:33:54.526 "trtype": "$TEST_TRANSPORT", 00:33:54.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.526 "adrfam": "ipv4", 00:33:54.526 "trsvcid": "$NVMF_PORT", 00:33:54.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.526 "hdgst": ${hdgst:-false}, 00:33:54.526 "ddgst": ${ddgst:-false} 00:33:54.526 }, 00:33:54.526 "method": "bdev_nvme_attach_controller" 00:33:54.526 } 00:33:54.526 EOF 00:33:54.526 )") 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:54.527 { 00:33:54.527 "params": { 00:33:54.527 "name": "Nvme$subsystem", 00:33:54.527 "trtype": "$TEST_TRANSPORT", 00:33:54.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.527 "adrfam": "ipv4", 00:33:54.527 "trsvcid": "$NVMF_PORT", 00:33:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.527 "hdgst": ${hdgst:-false}, 00:33:54.527 "ddgst": ${ddgst:-false} 00:33:54.527 }, 00:33:54.527 "method": "bdev_nvme_attach_controller" 00:33:54.527 } 00:33:54.527 EOF 00:33:54.527 )") 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 599062 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
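Each bdevperf instance reads its bdev stack from JSON handed over a process-substitution fd (the --json /dev/fd/63 in the command lines above): gen_nvmf_target_json expands the heredoc once per controller and pipes the result through jq. Roughly what each instance receives, assuming the helper wraps the rendered entry in the usual bdev-subsystem envelope (a sketch, not the literal helper output):

gen_json() {
cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[{
  "method":"bdev_nvme_attach_controller",
  "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
            "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
            "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}
JSON
}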
00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:54.527 "params": { 00:33:54.527 "name": "Nvme1", 00:33:54.527 "trtype": "tcp", 00:33:54.527 "traddr": "10.0.0.2", 00:33:54.527 "adrfam": "ipv4", 00:33:54.527 "trsvcid": "4420", 00:33:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.527 "hdgst": false, 00:33:54.527 "ddgst": false 00:33:54.527 }, 00:33:54.527 "method": "bdev_nvme_attach_controller" 00:33:54.527 }' 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:54.527 "params": { 00:33:54.527 "name": "Nvme1", 00:33:54.527 "trtype": "tcp", 00:33:54.527 "traddr": "10.0.0.2", 00:33:54.527 "adrfam": "ipv4", 00:33:54.527 "trsvcid": "4420", 00:33:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.527 "hdgst": false, 00:33:54.527 "ddgst": false 00:33:54.527 }, 00:33:54.527 "method": "bdev_nvme_attach_controller" 00:33:54.527 }' 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:54.527 "params": { 00:33:54.527 "name": "Nvme1", 00:33:54.527 "trtype": "tcp", 00:33:54.527 "traddr": "10.0.0.2", 00:33:54.527 "adrfam": "ipv4", 00:33:54.527 "trsvcid": "4420", 00:33:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.527 "hdgst": false, 00:33:54.527 "ddgst": false 00:33:54.527 }, 00:33:54.527 "method": "bdev_nvme_attach_controller" 00:33:54.527 }' 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:54.527 19:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:54.527 "params": { 00:33:54.527 "name": "Nvme1", 00:33:54.527 "trtype": "tcp", 00:33:54.527 "traddr": "10.0.0.2", 00:33:54.527 "adrfam": "ipv4", 00:33:54.527 "trsvcid": "4420", 00:33:54.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.527 "hdgst": false, 00:33:54.527 "ddgst": false 00:33:54.527 }, 00:33:54.527 "method": "bdev_nvme_attach_controller" 00:33:54.527 }' 00:33:54.527 [2024-11-05 19:23:23.841617] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:33:54.527 [2024-11-05 19:23:23.841618] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
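The four bdevperf instances then run concurrently, one workload apiece (write, read, flush, unmap), on disjoint core masks and with distinct shm IDs (-i 1..4) so their DPDK file prefixes (spdk1..spdk4) cannot collide; the script backgrounds all four and waits on the PIDs, as the wait 599062 above shows. A condensed launch pattern using the gen_json sketch (full paths shortened):

BP=./build/examples/bdevperf; ARGS=(-q 128 -o 4096 -t 1 -s 256)
$BP -m 0x10 -i 1 --json <(gen_json) "${ARGS[@]}" -w write & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_json) "${ARGS[@]}" -w read  & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_json) "${ARGS[@]}" -w flush & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_json) "${ARGS[@]}" -w unmap & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"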
00:33:54.527 [2024-11-05 19:23:23.841673] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:54.527 [2024-11-05 19:23:23.841674] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:54.527 [2024-11-05 19:23:23.842663] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:33:54.527 [2024-11-05 19:23:23.842710] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:54.527 [2024-11-05 19:23:23.843065] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:33:54.527 [2024-11-05 19:23:23.843109] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:54.787 [2024-11-05 19:23:24.005059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.787 [2024-11-05 19:23:24.033828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:54.787 [2024-11-05 19:23:24.062808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.787 [2024-11-05 19:23:24.092574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:54.787 [2024-11-05 19:23:24.106164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.047 [2024-11-05 19:23:24.134617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:55.047 [2024-11-05 19:23:24.156726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.047 [2024-11-05 19:23:24.184668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:55.047 Running I/O for 1 seconds... 00:33:55.047 Running I/O for 1 seconds... 00:33:55.047 Running I/O for 1 seconds... 00:33:55.047 Running I/O for 1 seconds...
00:33:55.988 21294.00 IOPS, 83.18 MiB/s 00:33:55.988 Latency(us) 00:33:55.988 [2024-11-05T18:23:25.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.988 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:55.988 Nvme1n1 : 1.01 21357.84 83.43 0.00 0.00 5977.87 2430.29 9065.81 00:33:55.988 [2024-11-05T18:23:25.311Z] =================================================================================================================== 00:33:55.988 [2024-11-05T18:23:25.311Z] Total : 21357.84 83.43 0.00 0.00 5977.87 2430.29 9065.81 00:33:55.988 7379.00 IOPS, 28.82 MiB/s 00:33:55.988 Latency(us) 00:33:55.988 [2024-11-05T18:23:25.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.988 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:55.988 Nvme1n1 : 1.02 7403.13 28.92 0.00 0.00 17170.56 5488.64 26432.85 00:33:55.988 [2024-11-05T18:23:25.311Z] =================================================================================================================== 00:33:55.988 [2024-11-05T18:23:25.311Z] Total : 7403.13 28.92 0.00 0.00 17170.56 5488.64 26432.85 00:33:56.248 186656.00 IOPS, 729.12 MiB/s 00:33:56.248 Latency(us) 00:33:56.248 [2024-11-05T18:23:25.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.248 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:56.248 Nvme1n1 : 1.00 186284.17 727.67 0.00 0.00 683.30 302.08 1966.08 00:33:56.248 [2024-11-05T18:23:25.571Z] =================================================================================================================== 00:33:56.248 [2024-11-05T18:23:25.571Z] Total : 186284.17 727.67 0.00 0.00 683.30 302.08 1966.08 00:33:56.248 7672.00 IOPS, 29.97 MiB/s [2024-11-05T18:23:25.571Z] 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 599064 00:33:56.248 00:33:56.248 Latency(us) 00:33:56.248 [2024-11-05T18:23:25.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.248 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:56.248 Nvme1n1 : 1.01 7796.88 30.46 0.00 0.00 16373.27 3659.09 32112.64 00:33:56.248 [2024-11-05T18:23:25.571Z] =================================================================================================================== 00:33:56.248 [2024-11-05T18:23:25.571Z] Total : 7796.88 30.46 0.00 0.00 16373.27 3659.09 32112.64 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 599066 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 599069 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:56.248 19:23:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:56.248 rmmod nvme_tcp 00:33:56.248 rmmod nvme_fabrics 00:33:56.248 rmmod nvme_keyring 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:56.248 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:33:56.249 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:33:56.249 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 598992 ']' 00:33:56.249 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 598992 00:33:56.249 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 598992 ']' 00:33:56.249 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 598992 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 598992 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 598992' 00:33:56.509 killing process with pid 598992 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 598992 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 598992 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
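
The killprocess trace above is deliberately defensive: it checks the pid is still alive with kill -0, verifies the process name (reactor_0 here) so a recycled pid is not killed by mistake, refuses to signal anything named sudo, then kills and reaps with wait. A condensed sketch of that pattern, not the autotest_common.sh implementation itself:

# Sketch of the kill-and-reap pattern traced above (simplified).
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
    [[ $name == sudo ]] && return 1               # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; ignore if not our child
}
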
-- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:56.509 19:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:59.052 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # 
grep -v SPDK_NVMF 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:33:59.053 00:33:59.053 real 0m12.850s 00:33:59.053 user 0m14.869s 00:33:59.053 sys 0m7.423s 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.053 ************************************ 00:33:59.053 END TEST nvmf_bdev_io_wait 00:33:59.053 ************************************ 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:59.053 ************************************ 00:33:59.053 START TEST nvmf_queue_depth 00:33:59.053 ************************************ 00:33:59.053 19:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:59.053 * Looking for test storage... 00:33:59.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.053 19:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.053 --rc genhtml_branch_coverage=1 00:33:59.053 --rc genhtml_function_coverage=1 00:33:59.053 --rc genhtml_legend=1 00:33:59.053 --rc geninfo_all_blocks=1 00:33:59.053 --rc geninfo_unexecuted_blocks=1 00:33:59.053 00:33:59.053 ' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.053 --rc genhtml_branch_coverage=1 00:33:59.053 --rc genhtml_function_coverage=1 00:33:59.053 --rc genhtml_legend=1 00:33:59.053 --rc geninfo_all_blocks=1 00:33:59.053 --rc geninfo_unexecuted_blocks=1 00:33:59.053 00:33:59.053 ' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.053 --rc 
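
The lt 1.15 2 check above (deciding whether the installed lcov predates 2.x) splits both version strings on '.', '-' and ':' and compares numeric fields left to right. A minimal re-implementation under the same assumptions, collapsing scripts/common.sh's cmp_versions dispatch into the '<' case and ignoring non-numeric fields:

# Sketch: return success iff version $1 sorts strictly before version $2,
# comparing dot/dash/colon separated numeric fields (missing fields are 0).
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the trace: returns 0
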
genhtml_branch_coverage=1 00:33:59.053 --rc genhtml_function_coverage=1 00:33:59.053 --rc genhtml_legend=1 00:33:59.053 --rc geninfo_all_blocks=1 00:33:59.053 --rc geninfo_unexecuted_blocks=1 00:33:59.053 00:33:59.053 ' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.053 --rc genhtml_branch_coverage=1 00:33:59.053 --rc genhtml_function_coverage=1 00:33:59.053 --rc genhtml_legend=1 00:33:59.053 --rc geninfo_all_blocks=1 00:33:59.053 --rc geninfo_unexecuted_blocks=1 00:33:59.053 00:33:59.053 ' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.053 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.054 19:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:59.054 19:23:28 
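
The PATH dumped above carries the same /opt/golangci, /opt/protoc, /opt/go block six times over because paths/export.sh prepends unconditionally every time it is sourced; harmless, but it makes these traces hard to read. A small dedup sketch, assuming first occurrence should win:

# Sketch: collapse duplicate PATH entries, preserving first-seen order.
dedup_path() {
    local entry out=
    while IFS= read -r -d: entry; do
        case ":$out:" in
            *":$entry:"*) ;;                  # already kept, skip
            *) out=${out:+$out:}$entry ;;
        esac
    done <<< "$PATH:"                         # trailing ':' terminates the last entry
    printf '%s\n' "$out"
}
PATH=$(dedup_path)
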
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:59.054 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:33:59.054 19:23:28 
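
build_nvmf_app_args above assembles the target's command line as a bash array, and the '[' 1 -eq 1 ']' branch appends --interrupt-mode because the whole suite was invoked with --interrupt-mode. A compressed sketch of that pattern; names other than NVMF_APP and the flags visible in the trace are hypothetical:

# Sketch of the app-argument assembly traced above.
interrupt_mode=1                            # this suite runs with --interrupt-mode
NVMF_APP=(./build/bin/nvmf_tgt)             # binary path as used later in the log
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
if (( interrupt_mode )); then
    NVMF_APP+=(--interrupt-mode)
fi
# eventually launched inside the target namespace with its own core mask:
# ip netns exec nvmf_ns_spdk "${NVMF_APP[@]}" -m 0x2 &
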
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:07.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:07.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:07.191 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.191 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:07.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:07.192 19:23:35 
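
The NIC probe above is a pure sysfs walk: every PCI function is matched on vendor/device ID (0x8086:0x159b is the Intel E810 family handled by the ice driver), and the attached kernel net device is read from the function's net/ subdirectory, yielding the two 'Found 0000:4b:00.x' hits. A standalone sketch of the same probe:

# Sketch: locate net devices backed by Intel E810 (0x8086:0x159b) functions,
# the same sysfs walk behind the 'Found ...' lines above.
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 ]] || continue
    [[ $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue
        echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done
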
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ 
phy == phy ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:07.192 10.0.0.1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:34:07.192 19:23:35 
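
set_ip above generates addresses from an integer pool: val_to_ip unpacks the 32-bit value into octets (167772161 = 0x0A000001, hence 10.0.0.1), and the pool advances by two per initiator/target pair. A reconstruction of that conversion, assuming plain byte extraction (the traced printf already shows the unpacked octets):

# Sketch of val_to_ip as traced above: 32-bit value -> dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8) & 255 ))  $(( val & 255 ))
}
val_to_ip 167772161   # 10.0.0.1, assigned to cvl_0_0 in the host namespace
val_to_ip 167772162   # 10.0.0.2, assigned to cvl_0_1 inside nvmf_ns_spdk
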
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:07.192 10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:07.192 19:23:35 
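
Condensed, the interface-pair setup traced above does five things: create the nvmf_ns_spdk namespace, move the second E810 port into it, address both sides, mirror each address into ifalias so later helpers can read it back from sysfs, and bring the links up. As a straight-line sketch:

# Straight-line sketch of the setup_interfaces sequence traced above.
ns=nvmf_ns_spdk
ip netns add "$ns"
ip netns exec "$ns" ip link set lo up
ip link set cvl_0_1 netns "$ns"                       # target port lives in the ns
ip addr add 10.0.0.1/24 dev cvl_0_0                   # initiator side, host ns
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec "$ns" tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up
ip netns exec "$ns" ip link set cvl_0_1 up
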
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
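
Every firewall rule the harness inserts is tagged with an SPDK_NVMF comment, which is what makes teardown generic: instead of remembering individual rules, iptr (seen at the end of the nvmf_bdev_io_wait run further up) filters tagged lines out of a full save/restore cycle. A sketch of both halves as traced:

# Sketch of the ipts/iptr pairing: tag rules going in, strip by tag going out.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener
# ... test runs ...
iptr                                                       # drop every tagged rule at once
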
-- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:07.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.631 ms 00:34:07.192 00:34:07.192 --- 10.0.0.1 ping statistics --- 00:34:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.192 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:07.192 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:07.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:34:07.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:34:07.193 00:34:07.193 --- 10.0.0.2 ping statistics --- 00:34:07.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.193 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.193 19:23:35 
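
The address lookups above never parse ip addr output; they read back the ifalias planted during setup, running cat through the namespace when the device lives there. A reduced sketch of that helper, skipping the initiator0/target0-to-device mapping the real get_ip_address layers on top:

# Sketch: device -> IP via ifalias, optionally inside the target namespace
# (the nameref mirrors the in_ns handling traced above).
get_ip() {
    local dev=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns                  # e.g. NVMF_TARGET_NS_CMD
        "${ns[@]}" cat "/sys/class/net/$dev/ifalias"
    else
        cat "/sys/class/net/$dev/ifalias"
    fi
}
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
get_ip cvl_0_0                       # -> 10.0.0.1
get_ip cvl_0_1 NVMF_TARGET_NS_CMD    # -> 10.0.0.2
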
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:07.193 
19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:07.193 19:23:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=603757 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 603757 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 603757 ']' 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:07.193 19:23:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.193 [2024-11-05 19:23:35.739250] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.193 [2024-11-05 19:23:35.740376] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:34:07.193 [2024-11-05 19:23:35.740429] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.193 [2024-11-05 19:23:35.843694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.193 [2024-11-05 19:23:35.893884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.193 [2024-11-05 19:23:35.893937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.193 [2024-11-05 19:23:35.893946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.193 [2024-11-05 19:23:35.893953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.193 [2024-11-05 19:23:35.893960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.193 [2024-11-05 19:23:35.894759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.193 [2024-11-05 19:23:35.970422] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.193 [2024-11-05 19:23:35.970720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
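The trace above is nvmfappstart: the target binary is launched inside the nvmf_ns_spdk namespace and waitforlisten then polls its UNIX-domain RPC socket until the app answers. A minimal sketch of that pattern, assuming the SPDK repo root as working directory and a 0.1 s poll interval (the binary flags and max_retries=100 come from the trace; the rpc_get_methods probe is an assumption about what the real helper calls):

    # sketch only -- not the harness code itself
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                    # max_retries=100, as traced
        # the RPC probe fails until the app listens on /var/tmp/spdk.sock
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1                                      # poll interval: assumption
    done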
00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 [2024-11-05 19:23:36.599590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 Malloc0 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
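The rpc_cmd calls traced here provision the target end to end: a TCP transport, a RAM-backed bdev, a subsystem, and its namespace, with the listener add completing just below. The same sequence as a plain rpc.py sketch (socket path and repo-root cwd are assumptions; every argument is copied from the trace):

    rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, -u 8192 = 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # attach Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420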
00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 [2024-11-05 19:23:36.687825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=603827 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 603827 /var/tmp/bdevperf.sock 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 603827 ']' 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:07.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:07.453 19:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:07.453 [2024-11-05 19:23:36.745712] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
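With the listener up, the harness starts bdevperf in its own process, points it at the target over TCP, and drives the run from bdevperf.py. A sketch of those three steps (commands and arguments as traced; the socket-wait loop is elided and works like the one sketched earlier):

    # queue depth 1024, 4 KiB verify workload, 10 s run; -z holds it until perform_tests
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # ...wait for /var/tmp/bdevperf.sock as before...
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests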
00:34:07.453 [2024-11-05 19:23:36.745786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603827 ] 00:34:07.713 [2024-11-05 19:23:36.820837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.713 [2024-11-05 19:23:36.862501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.283 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:08.283 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:34:08.283 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:08.283 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.283 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:08.543 NVMe0n1 00:34:08.543 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.543 19:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:08.543 Running I/O for 10 seconds... 00:34:10.860 8193.00 IOPS, 32.00 MiB/s [2024-11-05T18:23:41.122Z] 8704.50 IOPS, 34.00 MiB/s [2024-11-05T18:23:42.060Z] 8869.67 IOPS, 34.65 MiB/s [2024-11-05T18:23:42.998Z] 9594.00 IOPS, 37.48 MiB/s [2024-11-05T18:23:43.941Z] 10103.20 IOPS, 39.47 MiB/s [2024-11-05T18:23:44.883Z] 10419.17 IOPS, 40.70 MiB/s [2024-11-05T18:23:46.296Z] 10655.00 IOPS, 41.62 MiB/s [2024-11-05T18:23:46.868Z] 10757.38 IOPS, 42.02 MiB/s [2024-11-05T18:23:48.258Z] 10872.00 IOPS, 42.47 MiB/s [2024-11-05T18:23:48.258Z] 10957.90 IOPS, 42.80 MiB/s 00:34:18.935 Latency(us) 00:34:18.935 [2024-11-05T18:23:48.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.935 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:18.935 Verification LBA range: start 0x0 length 0x4000 00:34:18.935 NVMe0n1 : 10.07 10984.62 42.91 0.00 0.00 92887.28 24576.00 78643.20 00:34:18.935 [2024-11-05T18:23:48.258Z] =================================================================================================================== 00:34:18.935 [2024-11-05T18:23:48.258Z] Total : 10984.62 42.91 0.00 0.00 92887.28 24576.00 78643.20 00:34:18.935 { 00:34:18.935 "results": [ 00:34:18.935 { 00:34:18.935 "job": "NVMe0n1", 00:34:18.935 "core_mask": "0x1", 00:34:18.935 "workload": "verify", 00:34:18.935 "status": "finished", 00:34:18.935 "verify_range": { 00:34:18.935 "start": 0, 00:34:18.935 "length": 16384 00:34:18.935 }, 00:34:18.935 "queue_depth": 1024, 00:34:18.935 "io_size": 4096, 00:34:18.935 "runtime": 10.067346, 00:34:18.935 "iops": 10984.622958225535, 00:34:18.935 "mibps": 42.908683430568495, 00:34:18.935 "io_failed": 0, 00:34:18.935 "io_timeout": 0, 00:34:18.935 "avg_latency_us": 92887.27945743584, 00:34:18.935 "min_latency_us": 24576.0, 00:34:18.935 "max_latency_us": 78643.2 00:34:18.935 } 00:34:18.935 ], 00:34:18.935 
"core_count": 1 00:34:18.935 } 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 603827 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 603827 ']' 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 603827 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:18.935 19:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 603827 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 603827' 00:34:18.935 killing process with pid 603827 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 603827 00:34:18.935 Received shutdown signal, test time was about 10.000000 seconds 00:34:18.935 00:34:18.935 Latency(us) 00:34:18.935 [2024-11-05T18:23:48.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.935 [2024-11-05T18:23:48.258Z] =================================================================================================================== 00:34:18.935 [2024-11-05T18:23:48.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 603827 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:18.935 rmmod nvme_tcp 00:34:18.935 rmmod nvme_fabrics 00:34:18.935 rmmod nvme_keyring 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:34:18.935 19:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 603757 ']' 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 603757 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 603757 ']' 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 603757 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:18.935 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 603757 00:34:19.196 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:19.196 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:19.196 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 603757' 00:34:19.197 killing process with pid 603757 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 603757 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 603757 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:19.197 19:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:21.182 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:21.182 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:21.182 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:34:21.182 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:21.182 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:21.183 19:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:34:21.183 00:34:21.183 real 0m22.561s 00:34:21.183 user 0m24.772s 00:34:21.183 sys 0m7.469s 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:21.183 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:21.183 ************************************ 00:34:21.183 END TEST nvmf_queue_depth 00:34:21.183 ************************************ 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.483 ************************************ 00:34:21.483 START TEST nvmf_nmic 00:34:21.483 
************************************ 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:21.483 * Looking for test storage... 00:34:21.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:21.483 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.484 --rc genhtml_branch_coverage=1 00:34:21.484 --rc genhtml_function_coverage=1 00:34:21.484 --rc genhtml_legend=1 00:34:21.484 --rc geninfo_all_blocks=1 00:34:21.484 --rc geninfo_unexecuted_blocks=1 00:34:21.484 00:34:21.484 ' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.484 --rc genhtml_branch_coverage=1 00:34:21.484 --rc genhtml_function_coverage=1 00:34:21.484 --rc genhtml_legend=1 00:34:21.484 --rc geninfo_all_blocks=1 00:34:21.484 --rc geninfo_unexecuted_blocks=1 00:34:21.484 00:34:21.484 ' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.484 --rc genhtml_branch_coverage=1 00:34:21.484 --rc genhtml_function_coverage=1 00:34:21.484 --rc genhtml_legend=1 00:34:21.484 --rc geninfo_all_blocks=1 00:34:21.484 --rc geninfo_unexecuted_blocks=1 00:34:21.484 00:34:21.484 ' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.484 --rc genhtml_branch_coverage=1 00:34:21.484 --rc genhtml_function_coverage=1 00:34:21.484 --rc genhtml_legend=1 00:34:21.484 --rc geninfo_all_blocks=1 00:34:21.484 --rc geninfo_unexecuted_blocks=1 00:34:21.484 00:34:21.484 ' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:21.484 19:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.484 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:34:21.756 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:34:29.900 19:23:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:29.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:29.900 
19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:29.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:29.900 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:29.900 
19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:29.900 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:29.900 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:29.901 
19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:29.901 19:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:29.901 10.0.0.1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:29.901 10.0.0.2 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 
-- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:29.901 19:23:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:29.901 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:29.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:29.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.471 ms 00:34:29.902 00:34:29.902 --- 10.0.0.1 ping statistics --- 00:34:29.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.902 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 
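The set_ip/val_to_ip calls above turn the integer IP pool into dotted-quad addresses: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the pool advances by one per interface. The trace only shows the final printf with the four octets already split; extracting them by shifting, as sketched here, is an assumption about the helper's internals:

    # convert a 32-bit integer to a dotted-quad address
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    }
    val_to_ip 167772161   # 10.0.0.1 (initiator side, cvl_0_0)
    val_to_ip 167772162   # 10.0.0.2 (target side, cvl_0_1, inside nvmf_ns_spdk)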
00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:29.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:34:29.902 00:34:29.902 --- 10.0.0.2 ping statistics --- 00:34:29.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.902 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 
-- # dev=cvl_0_1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:34:29.902 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:29.903 19:23:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=610203 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 610203 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 610203 ']' 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:29.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:29.903 19:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:29.903 [2024-11-05 19:23:58.404876] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:29.903 [2024-11-05 19:23:58.405946] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:34:29.903 [2024-11-05 19:23:58.405996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.903 [2024-11-05 19:23:58.488382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:29.903 [2024-11-05 19:23:58.526573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.903 [2024-11-05 19:23:58.526608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.903 [2024-11-05 19:23:58.526616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.903 [2024-11-05 19:23:58.526623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.903 [2024-11-05 19:23:58.526629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:29.903 [2024-11-05 19:23:58.528144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.903 [2024-11-05 19:23:58.528255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:29.903 [2024-11-05 19:23:58.528409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.903 [2024-11-05 19:23:58.528409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:29.903 [2024-11-05 19:23:58.583457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
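nvmfappstart launches the target inside the namespace with exactly the flags shown above (-i 0 -e 0xFFFF --interrupt-mode -m 0xF) and then blocks until the RPC socket answers. A reduced sketch; the polling loop is a simplified stand-in for waitforlisten, and the relative paths assume the SPDK repo root as the working directory:

    # start the NVMe-oF target in the test namespace (flags as in the trace)
    ip netns exec nvmf_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # wait until the app is up and serving on the default RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done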
00:34:29.903 [2024-11-05 19:23:58.583801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:29.903 [2024-11-05 19:23:58.584596] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:29.903 [2024-11-05 19:23:58.585066] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:29.903 [2024-11-05 19:23:58.585221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:29.903 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:29.903 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:29.903 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:29.903 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:29.903 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 [2024-11-05 19:23:59.252932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 Malloc0 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 [2024-11-05 19:23:59.333041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:30.164 test case1: single bdev can't be used in multiple subsystems 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.164 [2024-11-05 19:23:59.368782] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:30.164 [2024-11-05 19:23:59.368803] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:30.164 [2024-11-05 19:23:59.368811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.164 request: 00:34:30.164 { 00:34:30.164 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:30.164 "namespace": { 00:34:30.164 "bdev_name": "Malloc0", 00:34:30.164 "no_auto_visible": false 00:34:30.164 }, 00:34:30.164 "method": "nvmf_subsystem_add_ns", 00:34:30.164 "req_id": 1 00:34:30.164 } 00:34:30.164 Got JSON-RPC error response 00:34:30.164 response: 00:34:30.164 { 00:34:30.164 "code": -32602, 
00:34:30.164 "message": "Invalid parameters" 00:34:30.164 } 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:30.164 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:30.164 Adding namespace failed - expected result. 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:30.165 test case2: host connect to nvmf target in multiple paths 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:30.165 [2024-11-05 19:23:59.380886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.165 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:30.426 19:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:30.997 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:30.997 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:30.997 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:30.997 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:30.997 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1210 -- # return 0 00:34:32.910 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:32.910 [global] 00:34:32.910 thread=1 00:34:32.910 invalidate=1 00:34:32.910 rw=write 00:34:32.910 time_based=1 00:34:32.910 runtime=1 00:34:32.910 ioengine=libaio 00:34:32.910 direct=1 00:34:32.910 bs=4096 00:34:32.910 iodepth=1 00:34:32.910 norandommap=0 00:34:32.910 numjobs=1 00:34:32.910 00:34:32.910 verify_dump=1 00:34:32.910 verify_backlog=512 00:34:32.910 verify_state_save=0 00:34:32.910 do_verify=1 00:34:32.910 verify=crc32c-intel 00:34:32.910 [job0] 00:34:32.910 filename=/dev/nvme0n1 00:34:32.910 Could not set queue depth (nvme0n1) 00:34:33.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.477 fio-3.35 00:34:33.477 Starting 1 thread 00:34:34.418 00:34:34.418 job0: (groupid=0, jobs=1): err= 0: pid=611403: Tue Nov 5 19:24:03 2024 00:34:34.418 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:34:34.418 slat (nsec): min=26131, max=28311, avg=26746.53, stdev=455.10 00:34:34.418 clat (usec): min=957, max=43005, avg=39890.78, stdev=9432.19 00:34:34.418 lat (usec): min=984, max=43032, avg=39917.53, stdev=9432.08 00:34:34.418 clat percentiles (usec): 00:34:34.418 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41681], 20.00th=[41681], 00:34:34.418 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:34.418 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:34:34.418 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:34:34.418 | 99.99th=[43254] 00:34:34.418 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:34:34.418 slat (nsec): min=9007, max=51896, avg=28334.67, stdev=10704.14 00:34:34.418 clat (usec): min=147, max=1840, avg=502.92, stdev=128.00 00:34:34.418 lat (usec): min=183, max=1874, avg=531.26, stdev=131.74 00:34:34.418 clat percentiles (usec): 00:34:34.418 | 1.00th=[ 289], 5.00th=[ 330], 10.00th=[ 347], 20.00th=[ 408], 00:34:34.418 | 30.00th=[ 433], 40.00th=[ 465], 50.00th=[ 478], 60.00th=[ 515], 00:34:34.418 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 685], 00:34:34.418 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 1844], 99.95th=[ 1844], 00:34:34.418 | 99.99th=[ 1844] 00:34:34.418 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:34.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:34.418 lat (usec) : 250=0.56%, 500=53.67%, 750=40.87%, 1000=1.32% 00:34:34.418 lat (msec) : 2=0.19%, 50=3.39% 00:34:34.418 cpu : usr=0.68%, sys=2.03%, ctx=531, majf=0, minf=1 00:34:34.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:34.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:34.418 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:34.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:34.418 00:34:34.418 Run status group 0 (all jobs): 00:34:34.418 READ: bw=73.5KiB/s (75.3kB/s), 73.5KiB/s-73.5KiB/s (75.3kB/s-75.3kB/s), io=76.0KiB (77.8kB), run=1034-1034msec 00:34:34.418 WRITE: bw=1981KiB/s (2028kB/s), 1981KiB/s-1981KiB/s (2028kB/s-2028kB/s), io=2048KiB (2097kB), run=1034-1034msec 00:34:34.418 00:34:34.418 Disk stats 
(read/write): 00:34:34.418 nvme0n1: ios=65/512, merge=0/0, ticks=645/213, in_queue=858, util=92.99% 00:34:34.418 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:34.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:34.679 rmmod nvme_tcp 00:34:34.679 rmmod nvme_fabrics 00:34:34.679 rmmod nvme_keyring 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 610203 ']' 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 610203 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 610203 ']' 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 610203 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:34.679 19:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 610203 00:34:34.940 19:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 610203' 00:34:34.940 killing process with pid 610203 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 610203 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 610203 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:34.940 19:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:37.483 19:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:34:37.483 00:34:37.483 real 0m15.698s 00:34:37.483 user 0m36.919s 00:34:37.483 sys 0m7.413s 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:37.483 ************************************ 00:34:37.483 END TEST nvmf_nmic 00:34:37.483 ************************************ 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:37.483 ************************************ 00:34:37.483 START TEST nvmf_fio_target 00:34:37.483 ************************************ 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:37.483 * Looking for test storage... 
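The nvmftestfini/nvmf_fini sequence traced above tears the setup down in reverse: the target namespace is removed first, which returns the physical cvl_0_1 device to the root namespace, then both test interfaces are flushed and only the iptables rules tagged with the SPDK_NVMF comment are stripped. A sketch; the flush and iptables pipeline are verbatim from the trace, while the body of _remove_target_ns is not expanded there and the netns delete is an assumption:

    ip netns delete nvmf_ns_spdk        # assumed body of _remove_target_ns
    # flush the test addresses off both interfaces (both now in the root namespace)
    ip addr flush dev cvl_0_0
    ip addr flush dev cvl_0_1
    # iptr: drop only the rules carrying the SPDK_NVMF comment, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore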
00:34:37.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:37.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.483 --rc genhtml_branch_coverage=1 00:34:37.483 --rc genhtml_function_coverage=1 00:34:37.483 --rc genhtml_legend=1 00:34:37.483 --rc geninfo_all_blocks=1 00:34:37.483 --rc geninfo_unexecuted_blocks=1 00:34:37.483 00:34:37.483 ' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:37.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.483 --rc genhtml_branch_coverage=1 00:34:37.483 --rc genhtml_function_coverage=1 00:34:37.483 --rc genhtml_legend=1 00:34:37.483 --rc geninfo_all_blocks=1 00:34:37.483 --rc geninfo_unexecuted_blocks=1 00:34:37.483 00:34:37.483 ' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:37.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.483 --rc genhtml_branch_coverage=1 00:34:37.483 --rc genhtml_function_coverage=1 00:34:37.483 --rc genhtml_legend=1 00:34:37.483 --rc geninfo_all_blocks=1 00:34:37.483 --rc geninfo_unexecuted_blocks=1 00:34:37.483 00:34:37.483 ' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:37.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.483 --rc genhtml_branch_coverage=1 00:34:37.483 --rc genhtml_function_coverage=1 00:34:37.483 --rc genhtml_legend=1 00:34:37.483 --rc geninfo_all_blocks=1 00:34:37.483 --rc geninfo_unexecuted_blocks=1 00:34:37.483 
00:34:37.483 ' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.483 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:34:37.484 19:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:45.619 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:45.619 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@131 -- # pci_devs=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:45.620 19:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:45.620 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:45.620 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.620 19:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:45.620 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:45.620 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:45.620 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:45.621 19:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:45.621 10.0.0.1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:45.621 19:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:45.621 10.0.0.2 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 
-- # ping_ips 1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:45.621 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:45.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:45.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.582 ms 00:34:45.622 00:34:45.622 --- 10.0.0.1 ping statistics --- 00:34:45.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.622 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:45.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:45.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:34:45.622 00:34:45.622 --- 10.0.0.2 ping statistics --- 00:34:45.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:45.622 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:45.622 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:45.623 19:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:45.623 19:24:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=616283 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 616283 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 616283 ']' 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:45.623 [2024-11-05 19:24:14.057134] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:45.623 [2024-11-05 19:24:14.058107] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:34:45.623 [2024-11-05 19:24:14.058146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.623 [2024-11-05 19:24:14.133755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:45.623 [2024-11-05 19:24:14.169491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.623 [2024-11-05 19:24:14.169523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.623 [2024-11-05 19:24:14.169530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.623 [2024-11-05 19:24:14.169537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.623 [2024-11-05 19:24:14.169543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.623 [2024-11-05 19:24:14.171236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.623 [2024-11-05 19:24:14.171348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:45.623 [2024-11-05 19:24:14.171506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.623 [2024-11-05 19:24:14.171506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:45.623 [2024-11-05 19:24:14.226661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:45.623 [2024-11-05 19:24:14.226700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
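The target is now up: nvmfappstart launched nvmf_tgt inside the nvmf_ns_spdk namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xF (pid 616283), and waitforlisten blocks until the app answers on its RPC socket while the reactor and thread notices below finish printing. A minimal sketch of that launch-and-wait pattern, using the command line visible in the trace; $SPDK_DIR is a hypothetical stand-in for the jenkins workspace path, and the RPC polling detail is an assumption (the harness's waitforlisten may check readiness differently):

    # Sketch only: mirrors the nvmf/common.sh@327 command traced above.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical checkout location

    ip netns exec nvmf_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # waitforlisten-style poll: the target is usable once its RPC socket
    # (/var/tmp/spdk.sock by default) accepts a request.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.5
    done
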
00:34:45.623 [2024-11-05 19:24:14.227668] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:45.623 [2024-11-05 19:24:14.228312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:45.623 [2024-11-05 19:24:14.228435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.623 19:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:45.883 [2024-11-05 19:24:15.080000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.883 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.143 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:46.143 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.403 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:46.403 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.403 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:46.403 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.663 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:46.663 19:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:46.922 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:46.922 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:46.922 19:24:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:47.182 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:47.182 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:47.442 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:47.442 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:47.442 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:47.702 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:47.702 19:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:47.962 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:47.962 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:47.962 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:48.222 [2024-11-05 19:24:17.428106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.222 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:48.480 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:48.740 19:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 
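Here the harness is inside waitforserial: nvme connect has been issued against 10.0.0.2:4420, and the script polls until all four namespaces of cnode1 surface as block devices carrying the subsystem serial. Reconstructed from the xtrace lines around this point (a sketch pieced together from the trace, not the harness source), the loop amounts to:

    # Poll until <expected> block devices report the given serial number.
    # The serial/expected values (SPDKISFASTANDAWESOME, 4) come from the trace.
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found=0
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == expected )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME 4   # 4 namespaces -> 4 nvme devices
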
00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:49.000 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:50.926 19:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:50.926 [global] 00:34:50.926 thread=1 00:34:50.926 invalidate=1 00:34:50.926 rw=write 00:34:50.926 time_based=1 00:34:50.926 runtime=1 00:34:50.926 ioengine=libaio 00:34:50.926 direct=1 00:34:50.926 bs=4096 00:34:50.926 iodepth=1 00:34:50.926 norandommap=0 00:34:50.926 numjobs=1 00:34:50.926 00:34:50.926 verify_dump=1 00:34:50.926 verify_backlog=512 00:34:50.926 verify_state_save=0 00:34:50.926 do_verify=1 00:34:50.926 verify=crc32c-intel 00:34:50.926 [job0] 00:34:50.926 filename=/dev/nvme0n1 00:34:50.926 [job1] 00:34:50.926 filename=/dev/nvme0n2 00:34:50.926 [job2] 00:34:50.926 filename=/dev/nvme0n3 00:34:50.926 [job3] 00:34:50.926 filename=/dev/nvme0n4 00:34:51.205 Could not set queue depth (nvme0n1) 00:34:51.205 Could not set queue depth (nvme0n2) 00:34:51.205 Could not set queue depth (nvme0n3) 00:34:51.205 Could not set queue depth (nvme0n4) 00:34:51.473 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.473 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.473 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.473 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.473 fio-3.35 00:34:51.473 Starting 4 threads 00:34:52.856 00:34:52.856 job0: (groupid=0, jobs=1): err= 0: pid=617836: Tue Nov 5 19:24:21 2024 00:34:52.856 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:52.856 slat (nsec): min=24329, max=42384, avg=25165.88, stdev=1713.11 00:34:52.856 clat (usec): min=736, max=1234, avg=1025.00, stdev=82.15 00:34:52.856 lat (usec): min=761, max=1259, avg=1050.17, stdev=81.97 00:34:52.856 clat percentiles (usec): 00:34:52.856 | 1.00th=[ 791], 5.00th=[ 848], 10.00th=[ 914], 20.00th=[ 979], 00:34:52.856 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1029], 60.00th=[ 1057], 00:34:52.856 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:34:52.856 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:52.856 | 99.99th=[ 1237] 00:34:52.856 write: IOPS=679, BW=2717KiB/s 
(2782kB/s)(2720KiB/1001msec); 0 zone resets 00:34:52.856 slat (nsec): min=9398, max=63699, avg=28912.40, stdev=9060.02 00:34:52.856 clat (usec): min=336, max=944, avg=637.48, stdev=101.82 00:34:52.856 lat (usec): min=352, max=977, avg=666.39, stdev=105.29 00:34:52.856 clat percentiles (usec): 00:34:52.856 | 1.00th=[ 363], 5.00th=[ 465], 10.00th=[ 494], 20.00th=[ 553], 00:34:52.856 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:34:52.856 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 783], 00:34:52.856 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:34:52.856 | 99.99th=[ 947] 00:34:52.856 bw ( KiB/s): min= 4096, max= 4096, per=47.97%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.856 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.856 lat (usec) : 500=6.12%, 750=43.88%, 1000=20.55% 00:34:52.856 lat (msec) : 2=29.45% 00:34:52.856 cpu : usr=1.80%, sys=3.40%, ctx=1192, majf=0, minf=1 00:34:52.856 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 issued rwts: total=512,680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.857 job1: (groupid=0, jobs=1): err= 0: pid=617852: Tue Nov 5 19:24:21 2024 00:34:52.857 read: IOPS=356, BW=1427KiB/s (1461kB/s)(1428KiB/1001msec) 00:34:52.857 slat (nsec): min=27012, max=45714, avg=28035.55, stdev=2824.36 00:34:52.857 clat (usec): min=858, max=42009, avg=1816.24, stdev=5214.14 00:34:52.857 lat (usec): min=887, max=42036, avg=1844.28, stdev=5214.09 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 906], 5.00th=[ 988], 10.00th=[ 1012], 20.00th=[ 1057], 00:34:52.857 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:34:52.857 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1237], 95.00th=[ 1287], 00:34:52.857 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:52.857 | 99.99th=[42206] 00:34:52.857 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:52.857 slat (nsec): min=9643, max=70032, avg=32493.27, stdev=9295.72 00:34:52.857 clat (usec): min=201, max=1096, avg=621.18, stdev=131.88 00:34:52.857 lat (usec): min=213, max=1131, avg=653.67, stdev=135.39 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 277], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 510], 00:34:52.857 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:34:52.857 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 799], 00:34:52.857 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 1090], 99.95th=[ 1090], 00:34:52.857 | 99.99th=[ 1090] 00:34:52.857 bw ( KiB/s): min= 4096, max= 4096, per=47.97%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.857 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.857 lat (usec) : 250=0.12%, 500=11.28%, 750=38.32%, 1000=12.31% 00:34:52.857 lat (msec) : 2=37.28%, 50=0.69% 00:34:52.857 cpu : usr=1.80%, sys=3.50%, ctx=870, majf=0, minf=1 00:34:52.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 issued rwts: total=357,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.857 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:34:52.857 job2: (groupid=0, jobs=1): err= 0: pid=617870: Tue Nov 5 19:24:21 2024 00:34:52.857 read: IOPS=201, BW=805KiB/s (825kB/s)(836KiB/1038msec) 00:34:52.857 slat (nsec): min=27247, max=47850, avg=28719.30, stdev=2734.91 00:34:52.857 clat (usec): min=672, max=42085, avg=2975.42, stdev=8665.59 00:34:52.857 lat (usec): min=701, max=42114, avg=3004.14, stdev=8665.52 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 775], 5.00th=[ 889], 10.00th=[ 955], 20.00th=[ 988], 00:34:52.857 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:34:52.857 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1254], 00:34:52.857 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:52.857 | 99.99th=[42206] 00:34:52.857 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:34:52.857 slat (usec): min=9, max=35003, avg=155.73, stdev=1755.10 00:34:52.857 clat (usec): min=246, max=1043, avg=634.57, stdev=128.35 00:34:52.857 lat (usec): min=256, max=35859, avg=790.30, stdev=1772.63 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 347], 5.00th=[ 404], 10.00th=[ 465], 20.00th=[ 537], 00:34:52.857 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:34:52.857 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 832], 00:34:52.857 | 99.00th=[ 922], 99.50th=[ 996], 99.90th=[ 1045], 99.95th=[ 1045], 00:34:52.857 | 99.99th=[ 1045] 00:34:52.857 bw ( KiB/s): min= 4096, max= 4096, per=47.97%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.857 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.857 lat (usec) : 250=0.14%, 500=10.26%, 750=47.99%, 1000=19.56% 00:34:52.857 lat (msec) : 2=20.67%, 50=1.39% 00:34:52.857 cpu : usr=1.35%, sys=2.89%, ctx=728, majf=0, minf=1 00:34:52.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 issued rwts: total=209,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.857 job3: (groupid=0, jobs=1): err= 0: pid=617872: Tue Nov 5 19:24:21 2024 00:34:52.857 read: IOPS=17, BW=69.8KiB/s (71.4kB/s)(72.0KiB/1032msec) 00:34:52.857 slat (nsec): min=27520, max=32762, avg=28036.44, stdev=1194.23 00:34:52.857 clat (usec): min=1127, max=42096, avg=39687.62, stdev=9623.74 00:34:52.857 lat (usec): min=1155, max=42129, avg=39715.65, stdev=9623.82 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41681], 20.00th=[41681], 00:34:52.857 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:52.857 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:52.857 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:52.857 | 99.99th=[42206] 00:34:52.857 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:34:52.857 slat (nsec): min=9658, max=56319, avg=32097.49, stdev=9965.02 00:34:52.857 clat (usec): min=143, max=976, avg=580.54, stdev=129.48 00:34:52.857 lat (usec): min=155, max=1011, avg=612.63, stdev=132.16 00:34:52.857 clat percentiles (usec): 00:34:52.857 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 396], 20.00th=[ 482], 00:34:52.857 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:34:52.857 | 
70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 775], 00:34:52.857 | 99.00th=[ 848], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:34:52.857 | 99.99th=[ 979] 00:34:52.857 bw ( KiB/s): min= 4096, max= 4096, per=47.97%, avg=4096.00, stdev= 0.00, samples=1 00:34:52.857 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:52.857 lat (usec) : 250=0.19%, 500=24.34%, 750=63.58%, 1000=8.49% 00:34:52.857 lat (msec) : 2=0.19%, 50=3.21% 00:34:52.857 cpu : usr=0.68%, sys=2.33%, ctx=531, majf=0, minf=1 00:34:52.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.857 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:52.857 00:34:52.857 Run status group 0 (all jobs): 00:34:52.857 READ: bw=4224KiB/s (4325kB/s), 69.8KiB/s-2046KiB/s (71.4kB/s-2095kB/s), io=4384KiB (4489kB), run=1001-1038msec 00:34:52.857 WRITE: bw=8539KiB/s (8744kB/s), 1973KiB/s-2717KiB/s (2020kB/s-2782kB/s), io=8864KiB (9077kB), run=1001-1038msec 00:34:52.857 00:34:52.857 Disk stats (read/write): 00:34:52.857 nvme0n1: ios=502/512, merge=0/0, ticks=514/328, in_queue=842, util=86.77% 00:34:52.857 nvme0n2: ios=278/512, merge=0/0, ticks=1428/253, in_queue=1681, util=96.52% 00:34:52.857 nvme0n3: ios=194/512, merge=0/0, ticks=766/260, in_queue=1026, util=97.34% 00:34:52.857 nvme0n4: ios=68/512, merge=0/0, ticks=844/239, in_queue=1083, util=96.77% 00:34:52.857 19:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:52.857 [global] 00:34:52.857 thread=1 00:34:52.857 invalidate=1 00:34:52.857 rw=randwrite 00:34:52.857 time_based=1 00:34:52.857 runtime=1 00:34:52.857 ioengine=libaio 00:34:52.857 direct=1 00:34:52.857 bs=4096 00:34:52.857 iodepth=1 00:34:52.857 norandommap=0 00:34:52.857 numjobs=1 00:34:52.857 00:34:52.857 verify_dump=1 00:34:52.857 verify_backlog=512 00:34:52.857 verify_state_save=0 00:34:52.857 do_verify=1 00:34:52.857 verify=crc32c-intel 00:34:52.857 [job0] 00:34:52.857 filename=/dev/nvme0n1 00:34:52.857 [job1] 00:34:52.857 filename=/dev/nvme0n2 00:34:52.857 [job2] 00:34:52.857 filename=/dev/nvme0n3 00:34:52.857 [job3] 00:34:52.857 filename=/dev/nvme0n4 00:34:52.857 Could not set queue depth (nvme0n1) 00:34:52.857 Could not set queue depth (nvme0n2) 00:34:52.857 Could not set queue depth (nvme0n3) 00:34:52.857 Could not set queue depth (nvme0n4) 00:34:53.126 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.126 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.126 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.126 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:53.126 fio-3.35 00:34:53.126 Starting 4 threads 00:34:54.513 00:34:54.513 job0: (groupid=0, jobs=1): err= 0: pid=618296: Tue Nov 5 19:24:23 2024 00:34:54.513 read: IOPS=15, BW=63.3KiB/s (64.8kB/s)(64.0KiB/1011msec) 00:34:54.513 slat (nsec): min=26024, max=30932, avg=26496.56, stdev=1188.26 00:34:54.513 clat (usec): min=41740, max=44040, 
avg=42073.31, stdev=531.72 00:34:54.513 lat (usec): min=41766, max=44071, avg=42099.80, stdev=532.89 00:34:54.513 clat percentiles (usec): 00:34:54.513 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:54.513 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:54.513 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[44303], 00:34:54.513 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:34:54.513 | 99.99th=[44303] 00:34:54.513 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:54.513 slat (nsec): min=8863, max=52344, avg=29578.03, stdev=8637.40 00:34:54.513 clat (usec): min=232, max=1058, avg=621.15, stdev=110.19 00:34:54.513 lat (usec): min=242, max=1068, avg=650.73, stdev=113.73 00:34:54.513 clat percentiles (usec): 00:34:54.513 | 1.00th=[ 359], 5.00th=[ 441], 10.00th=[ 482], 20.00th=[ 537], 00:34:54.513 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:34:54.513 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:34:54.513 | 99.00th=[ 898], 99.50th=[ 906], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:54.513 | 99.99th=[ 1057] 00:34:54.513 bw ( KiB/s): min= 4096, max= 4096, per=38.06%, avg=4096.00, stdev= 0.00, samples=1 00:34:54.513 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:54.513 lat (usec) : 250=0.19%, 500=12.88%, 750=73.86%, 1000=9.85% 00:34:54.513 lat (msec) : 2=0.19%, 50=3.03% 00:34:54.513 cpu : usr=1.39%, sys=1.58%, ctx=528, majf=0, minf=1 00:34:54.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.514 job1: (groupid=0, jobs=1): err= 0: pid=618314: Tue Nov 5 19:24:23 2024 00:34:54.514 read: IOPS=590, BW=2362KiB/s (2418kB/s)(2364KiB/1001msec) 00:34:54.514 slat (nsec): min=6761, max=56800, avg=23983.05, stdev=6525.76 00:34:54.514 clat (usec): min=273, max=1033, avg=727.02, stdev=117.61 00:34:54.514 lat (usec): min=299, max=1058, avg=751.00, stdev=118.94 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[ 388], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 627], 00:34:54.514 | 30.00th=[ 668], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 775], 00:34:54.514 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:34:54.514 | 99.00th=[ 963], 99.50th=[ 963], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:54.514 | 99.99th=[ 1037] 00:34:54.514 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:54.514 slat (nsec): min=9282, max=70099, avg=29819.14, stdev=7442.09 00:34:54.514 clat (usec): min=112, max=893, avg=500.93, stdev=131.75 00:34:54.514 lat (usec): min=121, max=939, avg=530.75, stdev=133.43 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[ 180], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[ 383], 00:34:54.514 | 30.00th=[ 437], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 537], 00:34:54.514 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 660], 95.00th=[ 701], 00:34:54.514 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 898], 00:34:54.514 | 99.99th=[ 898] 00:34:54.514 bw ( KiB/s): min= 4096, max= 4096, per=38.06%, avg=4096.00, stdev= 0.00, samples=1 00:34:54.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:34:54.514 lat (usec) : 250=1.55%, 500=31.46%, 750=48.42%, 1000=18.45% 00:34:54.514 lat (msec) : 2=0.12% 00:34:54.514 cpu : usr=2.20%, sys=4.80%, ctx=1616, majf=0, minf=1 00:34:54.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 issued rwts: total=591,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.514 job2: (groupid=0, jobs=1): err= 0: pid=618335: Tue Nov 5 19:24:23 2024 00:34:54.514 read: IOPS=15, BW=63.9KiB/s (65.5kB/s)(64.0KiB/1001msec) 00:34:54.514 slat (nsec): min=28156, max=29123, avg=28574.69, stdev=243.71 00:34:54.514 clat (usec): min=40901, max=42060, avg=41834.64, stdev=361.32 00:34:54.514 lat (usec): min=40930, max=42089, avg=41863.22, stdev=361.22 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:54.514 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:54.514 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:54.514 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:54.514 | 99.99th=[42206] 00:34:54.514 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:54.514 slat (nsec): min=9490, max=62572, avg=31390.99, stdev=10314.03 00:34:54.514 clat (usec): min=133, max=899, avg=606.18, stdev=125.07 00:34:54.514 lat (usec): min=143, max=923, avg=637.57, stdev=130.04 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[ 265], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 510], 00:34:54.514 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:34:54.514 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 783], 00:34:54.514 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 898], 00:34:54.514 | 99.99th=[ 898] 00:34:54.514 bw ( KiB/s): min= 4096, max= 4096, per=38.06%, avg=4096.00, stdev= 0.00, samples=1 00:34:54.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:54.514 lat (usec) : 250=0.76%, 500=17.23%, 750=68.56%, 1000=10.42% 00:34:54.514 lat (msec) : 50=3.03% 00:34:54.514 cpu : usr=1.30%, sys=1.80%, ctx=530, majf=0, minf=1 00:34:54.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.514 job3: (groupid=0, jobs=1): err= 0: pid=618343: Tue Nov 5 19:24:23 2024 00:34:54.514 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:54.514 slat (nsec): min=7006, max=55745, avg=26512.93, stdev=3789.81 00:34:54.514 clat (usec): min=633, max=1321, avg=1061.66, stdev=111.76 00:34:54.514 lat (usec): min=661, max=1365, avg=1088.17, stdev=112.63 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 922], 20.00th=[ 971], 00:34:54.514 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:34:54.514 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:34:54.514 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:54.514 | 99.99th=[ 1319] 
00:34:54.514 write: IOPS=671, BW=2685KiB/s (2750kB/s)(2688KiB/1001msec); 0 zone resets 00:34:54.514 slat (nsec): min=9913, max=54525, avg=29293.96, stdev=9761.54 00:34:54.514 clat (usec): min=275, max=942, avg=615.77, stdev=114.06 00:34:54.514 lat (usec): min=286, max=977, avg=645.07, stdev=118.22 00:34:54.514 clat percentiles (usec): 00:34:54.514 | 1.00th=[ 338], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 529], 00:34:54.514 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:34:54.514 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:34:54.514 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 947], 00:34:54.514 | 99.99th=[ 947] 00:34:54.514 bw ( KiB/s): min= 4096, max= 4096, per=38.06%, avg=4096.00, stdev= 0.00, samples=1 00:34:54.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:54.514 lat (usec) : 500=9.46%, 750=42.06%, 1000=16.22% 00:34:54.514 lat (msec) : 2=32.26% 00:34:54.514 cpu : usr=2.10%, sys=3.20%, ctx=1186, majf=0, minf=1 00:34:54.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.514 issued rwts: total=512,672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.514 00:34:54.514 Run status group 0 (all jobs): 00:34:54.514 READ: bw=4491KiB/s (4598kB/s), 63.3KiB/s-2362KiB/s (64.8kB/s-2418kB/s), io=4540KiB (4649kB), run=1001-1011msec 00:34:54.514 WRITE: bw=10.5MiB/s (11.0MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=10.6MiB (11.1MB), run=1001-1011msec 00:34:54.514 00:34:54.514 Disk stats (read/write): 00:34:54.514 nvme0n1: ios=61/512, merge=0/0, ticks=603/246, in_queue=849, util=94.89% 00:34:54.514 nvme0n2: ios=512/810, merge=0/0, ticks=359/392, in_queue=751, util=84.97% 00:34:54.514 nvme0n3: ios=68/512, merge=0/0, ticks=1044/248, in_queue=1292, util=96.07% 00:34:54.514 nvme0n4: ios=505/512, merge=0/0, ticks=926/315, in_queue=1241, util=96.02% 00:34:54.514 19:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:54.514 [global] 00:34:54.514 thread=1 00:34:54.514 invalidate=1 00:34:54.514 rw=write 00:34:54.514 time_based=1 00:34:54.514 runtime=1 00:34:54.514 ioengine=libaio 00:34:54.514 direct=1 00:34:54.514 bs=4096 00:34:54.514 iodepth=128 00:34:54.514 norandommap=0 00:34:54.514 numjobs=1 00:34:54.514 00:34:54.514 verify_dump=1 00:34:54.514 verify_backlog=512 00:34:54.514 verify_state_save=0 00:34:54.514 do_verify=1 00:34:54.514 verify=crc32c-intel 00:34:54.514 [job0] 00:34:54.514 filename=/dev/nvme0n1 00:34:54.514 [job1] 00:34:54.514 filename=/dev/nvme0n2 00:34:54.514 [job2] 00:34:54.514 filename=/dev/nvme0n3 00:34:54.514 [job3] 00:34:54.514 filename=/dev/nvme0n4 00:34:54.514 Could not set queue depth (nvme0n1) 00:34:54.514 Could not set queue depth (nvme0n2) 00:34:54.514 Could not set queue depth (nvme0n3) 00:34:54.514 Could not set queue depth (nvme0n4) 00:34:54.774 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.774 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.774 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:34:54.774 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:54.774 fio-3.35 00:34:54.774 Starting 4 threads 00:34:56.157 00:34:56.157 job0: (groupid=0, jobs=1): err= 0: pid=618773: Tue Nov 5 19:24:25 2024 00:34:56.157 read: IOPS=8292, BW=32.4MiB/s (34.0MB/s)(32.5MiB/1003msec) 00:34:56.157 slat (nsec): min=920, max=7370.0k, avg=61855.97, stdev=481624.63 00:34:56.157 clat (usec): min=1561, max=16452, avg=8034.12, stdev=2173.72 00:34:56.157 lat (usec): min=3072, max=16928, avg=8095.98, stdev=2199.54 00:34:56.157 clat percentiles (usec): 00:34:56.157 | 1.00th=[ 4047], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6456], 00:34:56.157 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7767], 00:34:56.157 | 70.00th=[ 8455], 80.00th=[ 9896], 90.00th=[11600], 95.00th=[12518], 00:34:56.157 | 99.00th=[13960], 99.50th=[14222], 99.90th=[16319], 99.95th=[16450], 00:34:56.157 | 99.99th=[16450] 00:34:56.157 write: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:34:56.157 slat (nsec): min=1613, max=6924.7k, avg=51244.37, stdev=346793.37 00:34:56.157 clat (usec): min=1213, max=16460, avg=6937.10, stdev=1803.94 00:34:56.157 lat (usec): min=1293, max=16465, avg=6988.34, stdev=1813.64 00:34:56.157 clat percentiles (usec): 00:34:56.157 | 1.00th=[ 2704], 5.00th=[ 4080], 10.00th=[ 4359], 20.00th=[ 5211], 00:34:56.157 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7373], 00:34:56.157 | 70.00th=[ 7570], 80.00th=[ 8225], 90.00th=[ 9110], 95.00th=[ 9896], 00:34:56.157 | 99.00th=[11731], 99.50th=[12256], 99.90th=[13304], 99.95th=[14484], 00:34:56.157 | 99.99th=[16450] 00:34:56.157 bw ( KiB/s): min=32768, max=36848, per=33.05%, avg=34808.00, stdev=2885.00, samples=2 00:34:56.157 iops : min= 8192, max= 9212, avg=8702.00, stdev=721.25, samples=2 00:34:56.157 lat (msec) : 2=0.13%, 4=2.47%, 10=85.72%, 20=11.69% 00:34:56.157 cpu : usr=4.49%, sys=8.98%, ctx=708, majf=0, minf=2 00:34:56.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:56.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:56.157 issued rwts: total=8317,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:56.157 job1: (groupid=0, jobs=1): err= 0: pid=618791: Tue Nov 5 19:24:25 2024 00:34:56.157 read: IOPS=6410, BW=25.0MiB/s (26.3MB/s)(25.2MiB/1005msec) 00:34:56.157 slat (nsec): min=936, max=13291k, avg=71123.93, stdev=585689.89 00:34:56.157 clat (usec): min=3182, max=27290, avg=9532.58, stdev=3792.26 00:34:56.157 lat (usec): min=3905, max=37352, avg=9603.70, stdev=3839.84 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6652], 00:34:56.158 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 8291], 60.00th=[ 9241], 00:34:56.158 | 70.00th=[10683], 80.00th=[12649], 90.00th=[14615], 95.00th=[16188], 00:34:56.158 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24249], 99.95th=[26084], 00:34:56.158 | 99.99th=[27395] 00:34:56.158 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:34:56.158 slat (nsec): min=1600, max=12979k, avg=76238.64, stdev=583303.56 00:34:56.158 clat (usec): min=1357, max=66422, avg=9856.33, stdev=7530.79 00:34:56.158 lat (usec): min=1366, max=66447, avg=9932.56, stdev=7583.32 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 
1.00th=[ 3752], 5.00th=[ 4113], 10.00th=[ 4359], 20.00th=[ 6063], 00:34:56.158 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8979], 00:34:56.158 | 70.00th=[ 9503], 80.00th=[11731], 90.00th=[14746], 95.00th=[20055], 00:34:56.158 | 99.00th=[52691], 99.50th=[57934], 99.90th=[66323], 99.95th=[66323], 00:34:56.158 | 99.99th=[66323] 00:34:56.158 bw ( KiB/s): min=24576, max=28672, per=25.28%, avg=26624.00, stdev=2896.31, samples=2 00:34:56.158 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:34:56.158 lat (msec) : 2=0.07%, 4=2.10%, 10=67.91%, 20=26.23%, 50=3.03% 00:34:56.158 lat (msec) : 100=0.66% 00:34:56.158 cpu : usr=4.48%, sys=6.87%, ctx=414, majf=0, minf=1 00:34:56.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:56.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:56.158 issued rwts: total=6443,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:56.158 job2: (groupid=0, jobs=1): err= 0: pid=618811: Tue Nov 5 19:24:25 2024 00:34:56.158 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:34:56.158 slat (nsec): min=923, max=13290k, avg=111004.49, stdev=831659.70 00:34:56.158 clat (usec): min=3478, max=30691, avg=15117.89, stdev=4358.73 00:34:56.158 lat (usec): min=3485, max=35098, avg=15228.89, stdev=4395.68 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 1.00th=[ 6915], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[11600], 00:34:56.158 | 30.00th=[13435], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:34:56.158 | 70.00th=[15926], 80.00th=[19268], 90.00th=[20841], 95.00th=[23462], 00:34:56.158 | 99.00th=[27395], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:34:56.158 | 99.99th=[30802] 00:34:56.158 write: IOPS=4724, BW=18.5MiB/s (19.3MB/s)(18.5MiB/1004msec); 0 zone resets 00:34:56.158 slat (nsec): min=1587, max=14562k, avg=90520.32, stdev=760892.22 00:34:56.158 clat (usec): min=723, max=65454, avg=12159.65, stdev=6426.81 00:34:56.158 lat (usec): min=1013, max=65463, avg=12250.17, stdev=6464.40 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 1.00th=[ 1369], 5.00th=[ 5145], 10.00th=[ 6980], 20.00th=[ 8717], 00:34:56.158 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11600], 60.00th=[13042], 00:34:56.158 | 70.00th=[13829], 80.00th=[14091], 90.00th=[15139], 95.00th=[19268], 00:34:56.158 | 99.00th=[43779], 99.50th=[60556], 99.90th=[61604], 99.95th=[65274], 00:34:56.158 | 99.99th=[65274] 00:34:56.158 bw ( KiB/s): min=16440, max=20480, per=17.53%, avg=18460.00, stdev=2856.71, samples=2 00:34:56.158 iops : min= 4110, max= 5120, avg=4615.00, stdev=714.18, samples=2 00:34:56.158 lat (usec) : 750=0.02%, 1000=0.01% 00:34:56.158 lat (msec) : 2=0.66%, 4=1.48%, 10=21.51%, 20=66.97%, 50=8.98% 00:34:56.158 lat (msec) : 100=0.37% 00:34:56.158 cpu : usr=2.59%, sys=6.18%, ctx=278, majf=0, minf=2 00:34:56.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:56.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:56.158 issued rwts: total=4608,4743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:56.158 job3: (groupid=0, jobs=1): err= 0: pid=618818: Tue Nov 5 19:24:25 2024 00:34:56.158 read: IOPS=6119, BW=23.9MiB/s 
(25.1MB/s)(24.0MiB/1004msec) 00:34:56.158 slat (nsec): min=1022, max=17569k, avg=82111.77, stdev=682665.05 00:34:56.158 clat (usec): min=2975, max=34960, avg=11013.75, stdev=5112.18 00:34:56.158 lat (usec): min=2986, max=37661, avg=11095.87, stdev=5161.94 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 1.00th=[ 5080], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7504], 00:34:56.158 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9634], 00:34:56.158 | 70.00th=[11863], 80.00th=[14091], 90.00th=[19006], 95.00th=[21627], 00:34:56.158 | 99.00th=[32637], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:34:56.158 | 99.99th=[34866] 00:34:56.158 write: IOPS=6336, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1004msec); 0 zone resets 00:34:56.158 slat (nsec): min=1730, max=10384k, avg=72021.66, stdev=531222.32 00:34:56.158 clat (usec): min=1186, max=25822, avg=9345.92, stdev=3878.52 00:34:56.158 lat (usec): min=1197, max=32858, avg=9417.94, stdev=3905.53 00:34:56.158 clat percentiles (usec): 00:34:56.158 | 1.00th=[ 3589], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 7046], 00:34:56.158 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8455], 00:34:56.158 | 70.00th=[ 9241], 80.00th=[11731], 90.00th=[14746], 95.00th=[18482], 00:34:56.158 | 99.00th=[22414], 99.50th=[22938], 99.90th=[25297], 99.95th=[25822], 00:34:56.158 | 99.99th=[25822] 00:34:56.158 bw ( KiB/s): min=20480, max=29400, per=23.68%, avg=24940.00, stdev=6307.39, samples=2 00:34:56.158 iops : min= 5120, max= 7350, avg=6235.00, stdev=1576.85, samples=2 00:34:56.158 lat (msec) : 2=0.13%, 4=0.84%, 10=66.28%, 20=27.83%, 50=4.93% 00:34:56.158 cpu : usr=5.28%, sys=6.48%, ctx=496, majf=0, minf=1 00:34:56.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:56.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:56.158 issued rwts: total=6144,6362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:56.158 00:34:56.158 Run status group 0 (all jobs): 00:34:56.158 READ: bw=99.2MiB/s (104MB/s), 17.9MiB/s-32.4MiB/s (18.8MB/s-34.0MB/s), io=99.7MiB (104MB), run=1003-1005msec 00:34:56.158 WRITE: bw=103MiB/s (108MB/s), 18.5MiB/s-33.9MiB/s (19.3MB/s-35.5MB/s), io=103MiB (108MB), run=1003-1005msec 00:34:56.158 00:34:56.158 Disk stats (read/write): 00:34:56.158 nvme0n1: ios=7201/7239, merge=0/0, ticks=53583/46383, in_queue=99966, util=87.58% 00:34:56.158 nvme0n2: ios=5158/5271, merge=0/0, ticks=48014/51720, in_queue=99734, util=88.27% 00:34:56.158 nvme0n3: ios=3654/4160, merge=0/0, ticks=53519/46168, in_queue=99687, util=88.37% 00:34:56.158 nvme0n4: ios=4977/5120, merge=0/0, ticks=55302/47627, in_queue=102929, util=96.79% 00:34:56.158 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:56.158 [global] 00:34:56.158 thread=1 00:34:56.158 invalidate=1 00:34:56.158 rw=randwrite 00:34:56.158 time_based=1 00:34:56.158 runtime=1 00:34:56.158 ioengine=libaio 00:34:56.158 direct=1 00:34:56.158 bs=4096 00:34:56.158 iodepth=128 00:34:56.158 norandommap=0 00:34:56.158 numjobs=1 00:34:56.158 00:34:56.158 verify_dump=1 00:34:56.158 verify_backlog=512 00:34:56.158 verify_state_save=0 00:34:56.158 do_verify=1 00:34:56.158 verify=crc32c-intel 00:34:56.158 [job0] 00:34:56.158 filename=/dev/nvme0n1 
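A note on the wrapper invocation at fio.sh@53: its flags appear to map one-to-one onto the [global] section printed in this listing: -i 4096 sets bs, -d 128 sets iodepth, -t randwrite sets rw, -r 1 sets runtime, and -v switches on the crc32c-intel verify settings. Assuming that mapping holds (the wrapper's actual templating is not visible in the log), a hand-rolled single-device equivalent would look like this; the file name is made up for illustration.

# Hypothetical stand-in for one device of the run above.
cat > /tmp/nvmf-randwrite-qd128.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-randwrite-qd128.fio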
00:34:56.158 [job1] 00:34:56.158 filename=/dev/nvme0n2 00:34:56.158 [job2] 00:34:56.158 filename=/dev/nvme0n3 00:34:56.158 [job3] 00:34:56.158 filename=/dev/nvme0n4 00:34:56.158 Could not set queue depth (nvme0n1) 00:34:56.158 Could not set queue depth (nvme0n2) 00:34:56.158 Could not set queue depth (nvme0n3) 00:34:56.158 Could not set queue depth (nvme0n4) 00:34:56.419 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:56.419 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:56.419 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:56.419 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:56.419 fio-3.35 00:34:56.419 Starting 4 threads 00:34:57.801 00:34:57.801 job0: (groupid=0, jobs=1): err= 0: pid=619293: Tue Nov 5 19:24:26 2024 00:34:57.801 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:34:57.801 slat (nsec): min=936, max=12974k, avg=94661.50, stdev=737704.40 00:34:57.801 clat (usec): min=2233, max=76834, avg=11765.70, stdev=7428.36 00:34:57.801 lat (usec): min=3182, max=76841, avg=11860.36, stdev=7512.01 00:34:57.801 clat percentiles (usec): 00:34:57.801 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 5997], 20.00th=[ 6718], 00:34:57.801 | 30.00th=[ 7504], 40.00th=[ 8979], 50.00th=[10552], 60.00th=[11731], 00:34:57.801 | 70.00th=[13435], 80.00th=[15139], 90.00th=[17695], 95.00th=[21103], 00:34:57.801 | 99.00th=[43779], 99.50th=[65799], 99.90th=[77071], 99.95th=[77071], 00:34:57.801 | 99.99th=[77071] 00:34:57.801 write: IOPS=4840, BW=18.9MiB/s (19.8MB/s)(19.1MiB/1009msec); 0 zone resets 00:34:57.801 slat (nsec): min=1571, max=17187k, avg=110831.99, stdev=731297.57 00:34:57.801 clat (usec): min=1145, max=76816, avg=15096.10, stdev=14926.31 00:34:57.801 lat (usec): min=1156, max=76825, avg=15206.94, stdev=15015.80 00:34:57.801 clat percentiles (usec): 00:34:57.801 | 1.00th=[ 3392], 5.00th=[ 4686], 10.00th=[ 5407], 20.00th=[ 6390], 00:34:57.801 | 30.00th=[ 7111], 40.00th=[ 8094], 50.00th=[10028], 60.00th=[12125], 00:34:57.801 | 70.00th=[13042], 80.00th=[15926], 90.00th=[37487], 95.00th=[56886], 00:34:57.801 | 99.00th=[68682], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:34:57.801 | 99.99th=[77071] 00:34:57.801 bw ( KiB/s): min=13504, max=24544, per=21.39%, avg=19024.00, stdev=7806.46, samples=2 00:34:57.801 iops : min= 3376, max= 6136, avg=4756.00, stdev=1951.61, samples=2 00:34:57.801 lat (msec) : 2=0.08%, 4=1.89%, 10=45.02%, 20=41.02%, 50=7.90% 00:34:57.801 lat (msec) : 100=4.09% 00:34:57.801 cpu : usr=3.67%, sys=4.96%, ctx=325, majf=0, minf=2 00:34:57.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:57.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:57.801 issued rwts: total=4608,4884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:57.801 job1: (groupid=0, jobs=1): err= 0: pid=619297: Tue Nov 5 19:24:26 2024 00:34:57.801 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:34:57.801 slat (nsec): min=917, max=12913k, avg=140352.99, stdev=909807.12 00:34:57.801 clat (usec): min=5099, max=41667, avg=16626.01, stdev=8441.27 00:34:57.801 lat (usec): min=5101, max=41692, avg=16766.36, 
stdev=8534.59 00:34:57.801 clat percentiles (usec): 00:34:57.801 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 9765], 00:34:57.801 | 30.00th=[10028], 40.00th=[10945], 50.00th=[12518], 60.00th=[16319], 00:34:57.801 | 70.00th=[23462], 80.00th=[25822], 90.00th=[27919], 95.00th=[30802], 00:34:57.801 | 99.00th=[36439], 99.50th=[40109], 99.90th=[40633], 99.95th=[41157], 00:34:57.801 | 99.99th=[41681] 00:34:57.801 write: IOPS=3846, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1005msec); 0 zone resets 00:34:57.801 slat (nsec): min=1503, max=15762k, avg=122728.18, stdev=828938.41 00:34:57.801 clat (usec): min=3216, max=49054, avg=17330.83, stdev=9235.25 00:34:57.801 lat (usec): min=3223, max=49078, avg=17453.56, stdev=9303.07 00:34:57.801 clat percentiles (usec): 00:34:57.801 | 1.00th=[ 4359], 5.00th=[ 5932], 10.00th=[ 7635], 20.00th=[ 9110], 00:34:57.801 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[14877], 60.00th=[20841], 00:34:57.801 | 70.00th=[23725], 80.00th=[25560], 90.00th=[29754], 95.00th=[33424], 00:34:57.801 | 99.00th=[40633], 99.50th=[40633], 99.90th=[47973], 99.95th=[48497], 00:34:57.801 | 99.99th=[49021] 00:34:57.801 bw ( KiB/s): min=13528, max=16384, per=16.81%, avg=14956.00, stdev=2019.50, samples=2 00:34:57.801 iops : min= 3382, max= 4096, avg=3739.00, stdev=504.87, samples=2 00:34:57.801 lat (msec) : 4=0.07%, 10=29.81%, 20=30.89%, 50=39.23% 00:34:57.801 cpu : usr=2.89%, sys=3.88%, ctx=274, majf=0, minf=1 00:34:57.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:57.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:57.801 issued rwts: total=3584,3866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:57.801 job2: (groupid=0, jobs=1): err= 0: pid=619304: Tue Nov 5 19:24:26 2024 00:34:57.801 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:34:57.801 slat (nsec): min=919, max=13619k, avg=79434.93, stdev=648892.70 00:34:57.801 clat (usec): min=1761, max=28458, avg=10668.01, stdev=3778.33 00:34:57.801 lat (usec): min=1788, max=28481, avg=10747.44, stdev=3829.92 00:34:57.801 clat percentiles (usec): 00:34:57.801 | 1.00th=[ 4555], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7701], 00:34:57.801 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[10814], 00:34:57.801 | 70.00th=[11338], 80.00th=[13173], 90.00th=[16188], 95.00th=[18482], 00:34:57.801 | 99.00th=[22152], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:34:57.801 | 99.99th=[28443] 00:34:57.801 write: IOPS=6433, BW=25.1MiB/s (26.4MB/s)(25.3MiB/1008msec); 0 zone resets 00:34:57.801 slat (nsec): min=1592, max=10284k, avg=70696.42, stdev=544410.65 00:34:57.801 clat (usec): min=573, max=31748, avg=9622.68, stdev=4217.04 00:34:57.801 lat (usec): min=678, max=31751, avg=9693.38, stdev=4245.62 00:34:57.802 clat percentiles (usec): 00:34:57.802 | 1.00th=[ 1418], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 6587], 00:34:57.802 | 30.00th=[ 7111], 40.00th=[ 7963], 50.00th=[ 9241], 60.00th=[10683], 00:34:57.802 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13829], 95.00th=[16057], 00:34:57.802 | 99.00th=[27395], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:34:57.802 | 99.99th=[31851] 00:34:57.802 bw ( KiB/s): min=24576, max=26280, per=28.59%, avg=25428.00, stdev=1204.91, samples=2 00:34:57.802 iops : min= 6144, max= 6570, avg=6357.00, stdev=301.23, samples=2 00:34:57.802 lat (usec) : 750=0.01%, 
1000=0.01% 00:34:57.802 lat (msec) : 2=0.78%, 4=1.21%, 10=49.97%, 20=45.86%, 50=2.15% 00:34:57.802 cpu : usr=3.87%, sys=7.75%, ctx=368, majf=0, minf=1 00:34:57.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:57.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:57.802 issued rwts: total=6144,6485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:57.802 job3: (groupid=0, jobs=1): err= 0: pid=619310: Tue Nov 5 19:24:26 2024 00:34:57.802 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:34:57.802 slat (nsec): min=929, max=10257k, avg=70996.62, stdev=542872.50 00:34:57.802 clat (usec): min=3235, max=19232, avg=9496.20, stdev=2564.15 00:34:57.802 lat (usec): min=3252, max=21162, avg=9567.20, stdev=2588.33 00:34:57.802 clat percentiles (usec): 00:34:57.802 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 7308], 00:34:57.802 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:57.802 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12649], 95.00th=[14877], 00:34:57.802 | 99.00th=[16450], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:34:57.802 | 99.99th=[19268] 00:34:57.802 write: IOPS=7168, BW=28.0MiB/s (29.4MB/s)(28.1MiB/1005msec); 0 zone resets 00:34:57.802 slat (nsec): min=1553, max=8368.8k, avg=58260.22, stdev=413277.79 00:34:57.802 clat (usec): min=367, max=26977, avg=8255.74, stdev=3747.98 00:34:57.802 lat (usec): min=400, max=26987, avg=8314.00, stdev=3764.24 00:34:57.802 clat percentiles (usec): 00:34:57.802 | 1.00th=[ 1483], 5.00th=[ 3720], 10.00th=[ 4555], 20.00th=[ 5473], 00:34:57.802 | 30.00th=[ 6652], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8094], 00:34:57.802 | 70.00th=[ 8455], 80.00th=[10159], 90.00th=[13173], 95.00th=[16450], 00:34:57.802 | 99.00th=[21103], 99.50th=[23462], 99.90th=[25560], 99.95th=[26870], 00:34:57.802 | 99.99th=[26870] 00:34:57.802 bw ( KiB/s): min=28672, max=28672, per=32.23%, avg=28672.00, stdev= 0.00, samples=2 00:34:57.802 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:34:57.802 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.15% 00:34:57.802 lat (msec) : 2=0.89%, 4=2.95%, 10=68.33%, 20=26.84%, 50=0.82% 00:34:57.802 cpu : usr=5.08%, sys=8.37%, ctx=408, majf=0, minf=1 00:34:57.802 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:57.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:57.802 issued rwts: total=7168,7204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.802 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:57.802 00:34:57.802 Run status group 0 (all jobs): 00:34:57.802 READ: bw=83.2MiB/s (87.3MB/s), 13.9MiB/s-27.9MiB/s (14.6MB/s-29.2MB/s), io=84.0MiB (88.1MB), run=1005-1009msec 00:34:57.802 WRITE: bw=86.9MiB/s (91.1MB/s), 15.0MiB/s-28.0MiB/s (15.8MB/s-29.4MB/s), io=87.7MiB (91.9MB), run=1005-1009msec 00:34:57.802 00:34:57.802 Disk stats (read/write): 00:34:57.802 nvme0n1: ios=3743/4096, merge=0/0, ticks=39533/62657, in_queue=102190, util=86.67% 00:34:57.802 nvme0n2: ios=2976/3072, merge=0/0, ticks=18256/17840, in_queue=36096, util=87.33% 00:34:57.802 nvme0n3: ios=5155/5533, merge=0/0, ticks=51204/48628, in_queue=99832, util=96.08% 00:34:57.802 nvme0n4: ios=5936/6144, merge=0/0, ticks=51171/45578, in_queue=96749, 
util=91.00% 00:34:57.802 19:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:57.802 19:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=619457 00:34:57.802 19:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:57.802 19:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:57.802 [global] 00:34:57.802 thread=1 00:34:57.802 invalidate=1 00:34:57.802 rw=read 00:34:57.802 time_based=1 00:34:57.802 runtime=10 00:34:57.802 ioengine=libaio 00:34:57.802 direct=1 00:34:57.802 bs=4096 00:34:57.802 iodepth=1 00:34:57.802 norandommap=1 00:34:57.802 numjobs=1 00:34:57.802 00:34:57.802 [job0] 00:34:57.802 filename=/dev/nvme0n1 00:34:57.802 [job1] 00:34:57.802 filename=/dev/nvme0n2 00:34:57.802 [job2] 00:34:57.802 filename=/dev/nvme0n3 00:34:57.802 [job3] 00:34:57.802 filename=/dev/nvme0n4 00:34:57.802 Could not set queue depth (nvme0n1) 00:34:57.802 Could not set queue depth (nvme0n2) 00:34:57.802 Could not set queue depth (nvme0n3) 00:34:57.802 Could not set queue depth (nvme0n4) 00:34:58.062 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.062 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.062 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.062 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:58.062 fio-3.35 00:34:58.062 Starting 4 threads 00:35:00.606 19:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:00.866 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10985472, buflen=4096 00:35:00.866 fio: pid=619767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:00.866 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:01.127 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.127 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:01.127 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=14671872, buflen=4096 00:35:01.127 fio: pid=619762, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:01.127 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=491520, buflen=4096 00:35:01.127 fio: pid=619729, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:01.389 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.389 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc1 00:35:01.389 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.389 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:01.389 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=806912, buflen=4096 00:35:01.389 fio: pid=619743, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:01.389 00:35:01.389 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=619729: Tue Nov 5 19:24:30 2024 00:35:01.389 read: IOPS=40, BW=161KiB/s (165kB/s)(480KiB/2976msec) 00:35:01.389 slat (usec): min=7, max=2604, avg=44.50, stdev=234.82 00:35:01.389 clat (usec): min=239, max=42341, avg=24566.52, stdev=20404.09 00:35:01.389 lat (usec): min=248, max=44124, avg=24611.17, stdev=20427.37 00:35:01.389 clat percentiles (usec): 00:35:01.389 | 1.00th=[ 241], 5.00th=[ 265], 10.00th=[ 445], 20.00th=[ 506], 00:35:01.389 | 30.00th=[ 594], 40.00th=[ 857], 50.00th=[41157], 60.00th=[41681], 00:35:01.389 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:01.389 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:01.389 | 99.99th=[42206] 00:35:01.389 bw ( KiB/s): min= 96, max= 352, per=2.10%, avg=174.40, stdev=115.43, samples=5 00:35:01.389 iops : min= 24, max= 88, avg=43.60, stdev=28.86, samples=5 00:35:01.389 lat (usec) : 250=4.13%, 500=14.88%, 750=18.18%, 1000=3.31% 00:35:01.389 lat (msec) : 2=0.83%, 50=57.85% 00:35:01.389 cpu : usr=0.00%, sys=0.20%, ctx=122, majf=0, minf=2 00:35:01.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 issued rwts: total=121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.389 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=619743: Tue Nov 5 19:24:30 2024 00:35:01.389 read: IOPS=62, BW=248KiB/s (254kB/s)(788KiB/3182msec) 00:35:01.389 slat (usec): min=7, max=14676, avg=136.40, stdev=1167.06 00:35:01.389 clat (usec): min=219, max=44985, avg=15892.53, stdev=20066.87 00:35:01.389 lat (usec): min=250, max=56017, avg=16029.49, stdev=20245.05 00:35:01.389 clat percentiles (usec): 00:35:01.389 | 1.00th=[ 277], 5.00th=[ 441], 10.00th=[ 465], 20.00th=[ 494], 00:35:01.389 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 685], 00:35:01.389 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:01.389 | 99.00th=[43254], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:35:01.389 | 99.99th=[44827] 00:35:01.389 bw ( KiB/s): min= 88, max= 1072, per=3.09%, avg=256.83, stdev=399.36, samples=6 00:35:01.389 iops : min= 22, max= 268, avg=64.17, stdev=99.86, samples=6 00:35:01.389 lat (usec) : 250=0.51%, 500=23.74%, 750=37.37% 00:35:01.389 lat (msec) : 2=1.01%, 50=36.87% 00:35:01.389 cpu : usr=0.00%, sys=0.28%, ctx=200, majf=0, minf=2 00:35:01.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 complete : 0=0.5%, 4=99.5%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 issued rwts: total=198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.389 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=619762: Tue Nov 5 19:24:30 2024 00:35:01.389 read: IOPS=1264, BW=5058KiB/s (5179kB/s)(14.0MiB/2833msec) 00:35:01.389 slat (usec): min=6, max=11150, avg=28.73, stdev=211.73 00:35:01.389 clat (usec): min=269, max=41553, avg=748.95, stdev=690.17 00:35:01.389 lat (usec): min=277, max=41602, avg=777.68, stdev=722.96 00:35:01.389 clat percentiles (usec): 00:35:01.389 | 1.00th=[ 457], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 652], 00:35:01.389 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 783], 00:35:01.389 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:35:01.389 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1369], 00:35:01.389 | 99.99th=[41681] 00:35:01.389 bw ( KiB/s): min= 5176, max= 5280, per=63.05%, avg=5216.00, stdev=40.00, samples=5 00:35:01.389 iops : min= 1294, max= 1320, avg=1304.00, stdev=10.00, samples=5 00:35:01.389 lat (usec) : 500=1.93%, 750=49.18%, 1000=48.70% 00:35:01.389 lat (msec) : 2=0.14%, 50=0.03% 00:35:01.389 cpu : usr=1.55%, sys=3.28%, ctx=3586, majf=0, minf=2 00:35:01.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 issued rwts: total=3583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.389 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=619767: Tue Nov 5 19:24:30 2024 00:35:01.389 read: IOPS=1026, BW=4106KiB/s (4204kB/s)(10.5MiB/2613msec) 00:35:01.389 slat (nsec): min=6315, max=71648, avg=25904.05, stdev=4379.51 00:35:01.389 clat (usec): min=331, max=42318, avg=932.53, stdev=814.41 00:35:01.389 lat (usec): min=357, max=42332, avg=958.43, stdev=814.38 00:35:01.389 clat percentiles (usec): 00:35:01.389 | 1.00th=[ 465], 5.00th=[ 578], 10.00th=[ 758], 20.00th=[ 848], 00:35:01.389 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 955], 60.00th=[ 971], 00:35:01.389 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:35:01.389 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 3785], 99.95th=[ 3949], 00:35:01.389 | 99.99th=[42206] 00:35:01.389 bw ( KiB/s): min= 4040, max= 4488, per=50.22%, avg=4155.20, stdev=190.71, samples=5 00:35:01.389 iops : min= 1010, max= 1122, avg=1038.80, stdev=47.68, samples=5 00:35:01.389 lat (usec) : 500=2.09%, 750=7.45%, 1000=66.16% 00:35:01.389 lat (msec) : 2=24.15%, 4=0.07%, 50=0.04% 00:35:01.389 cpu : usr=0.88%, sys=4.82%, ctx=2684, majf=0, minf=1 00:35:01.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.389 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.389 00:35:01.389 Run status group 0 (all jobs): 00:35:01.389 READ: bw=8273KiB/s (8471kB/s), 161KiB/s-5058KiB/s (165kB/s-5179kB/s), io=25.7MiB (27.0MB), run=2613-3182msec 00:35:01.389 00:35:01.389 Disk 
stats (read/write): 00:35:01.389 nvme0n1: ios=117/0, merge=0/0, ticks=2822/0, in_queue=2822, util=95.26% 00:35:01.389 nvme0n2: ios=195/0, merge=0/0, ticks=3049/0, in_queue=3049, util=95.64% 00:35:01.389 nvme0n3: ios=3394/0, merge=0/0, ticks=2425/0, in_queue=2425, util=96.16% 00:35:01.390 nvme0n4: ios=2683/0, merge=0/0, ticks=2476/0, in_queue=2476, util=96.33% 00:35:01.651 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.651 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:01.912 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.912 19:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:01.912 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:01.912 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:02.173 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:02.173 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 619457 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:02.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:02.435 19:24:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:02.435 nvmf hotplug test: fio failed as expected 00:35:02.435 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:02.695 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:02.695 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:02.695 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:02.695 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:02.695 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:02.696 rmmod nvme_tcp 00:35:02.696 rmmod nvme_fabrics 00:35:02.696 rmmod nvme_keyring 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 616283 ']' 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 616283 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 616283 ']' 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 616283 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 616283 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 616283' 00:35:02.696 killing process with pid 616283 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 616283 00:35:02.696 19:24:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 616283 00:35:02.956 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:02.956 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:35:02.957 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:35:02.957 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:02.957 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:02.957 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:02.957 19:24:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:04.867 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:35:05.128 00:35:05.128 real 0m27.859s 00:35:05.128 user 2m16.202s 00:35:05.128 sys 0m11.979s 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:05.128 ************************************ 00:35:05.128 END TEST nvmf_fio_target 00:35:05.128 ************************************ 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:05.128 ************************************ 00:35:05.128 START TEST nvmf_bdevio 00:35:05.128 ************************************ 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:05.128 * Looking for test storage... 
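
The iptr call near the start of this stretch is the teardown half of a tagging scheme: every firewall rule the suite adds (via ipts, which shows up again during the bdevio setup below) carries an SPDK_NVMF comment, so cleanup simply restores an iptables-save dump with the tagged lines filtered out. A condensed sketch of the pair, reconstructed from the nvmf/common.sh@541-542 traces:

    ipts() {
        # insert a rule tagged with a recognizable comment
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # restore a dump with every tagged rule dropped
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT   # setup
    iptr                                                       # teardown
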
00:35:05.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:35:05.128 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.389 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.390 --rc genhtml_branch_coverage=1 00:35:05.390 --rc genhtml_function_coverage=1 00:35:05.390 --rc genhtml_legend=1 00:35:05.390 --rc geninfo_all_blocks=1 00:35:05.390 --rc geninfo_unexecuted_blocks=1 00:35:05.390 00:35:05.390 ' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.390 --rc genhtml_branch_coverage=1 00:35:05.390 --rc genhtml_function_coverage=1 00:35:05.390 --rc genhtml_legend=1 00:35:05.390 --rc geninfo_all_blocks=1 00:35:05.390 --rc geninfo_unexecuted_blocks=1 00:35:05.390 00:35:05.390 ' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.390 --rc genhtml_branch_coverage=1 00:35:05.390 --rc genhtml_function_coverage=1 00:35:05.390 --rc genhtml_legend=1 00:35:05.390 --rc geninfo_all_blocks=1 00:35:05.390 --rc geninfo_unexecuted_blocks=1 00:35:05.390 00:35:05.390 ' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:05.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.390 --rc genhtml_branch_coverage=1 00:35:05.390 --rc genhtml_function_coverage=1 00:35:05.390 --rc genhtml_legend=1 00:35:05.390 --rc geninfo_all_blocks=1 00:35:05.390 --rc geninfo_unexecuted_blocks=1 00:35:05.390 00:35:05.390 ' 00:35:05.390 19:24:34 
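
The lcov probe above exercises the version comparison in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared numerically field by field. A simplified sketch covering only the '<' case used here; the real helper also normalizes each field through its decimal() routine, visible in the trace:

    lt() {   # usage: lt VER1 VER2 -> success if VER1 < VER2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not strictly less
    }
    lt 1.15 2 && echo legacy   # succeeds: 1 < 2 in the first field
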
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.390 19:24:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.390 19:24:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:35:05.390 19:24:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local 
-ga net_devs 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:13.529 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:13.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:13.530 19:24:41 
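
The NIC probe above works off a prebuilt index of PCI functions: each supported family (e810, x722, mlx) is a list of vendor:device IDs, and the pci_bus_cache lookups turn those IDs into bus addresses. A sketch of the idea, assuming the cache is keyed exactly as in the traces (the real gather step also tracks bound drivers and the RDMA-only families):

    declare -A pci_bus_cache
    # index every PCI function by "vendor:device"
    for dev in /sys/bus/pci/devices/*; do
        key="$(< "$dev/vendor"):$(< "$dev/device")"   # e.g. 0x8086:0x159b
        pci_bus_cache[$key]+="${dev##*/} "
    done
    # Intel E810 family, matching the lookups traced above
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810 on this rig
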
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:13.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:13.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 
-- # [[ tcp == tcp ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:13.530 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:13.530 19:24:41 
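
What setup_interfaces is about to do, condensed: on a phy run the two ports of the E810 pair become the initiator and target ends of a point-to-point link, with the target port moved into the nvmf_ns_spdk namespace so the kernel initiator and the SPDK target do not share a network stack. The commands below are collected verbatim from the traces that follow:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk             # target side into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_0                # initiator side
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
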
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:35:13.530 19:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:13.530 10.0.0.1 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:13.530 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:13.531 10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 
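
The printf just traced is the tail of val_to_ip: the address pool is carried around as a 32-bit integer (0x0a000001 = 167772161) and rendered one byte at a time. A sketch with the byte splitting written out; the shift-and-mask body is an assumption, since the trace only shows the final printf:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) \
            $((  val        & 255 ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2
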
00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n 
cvl_0_0 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:13.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:13.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.675 ms 00:35:13.531 00:35:13.531 --- 10.0.0.1 ping statistics --- 00:35:13.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.531 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:13.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:35:13.531 00:35:13.531 --- 10.0.0.2 ping statistics --- 00:35:13.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.531 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:13.531 19:24:41 
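
The address lookups running here lean on the ifalias trick from setup: each IP was written both to the interface and to /sys/class/net/<dev>/ifalias, so the helpers resolve a logical name (initiator0, target0) through dev_map and read the alias back instead of parsing ip addr output. A namespace-less sketch:

    get_ip_address() {
        local dev=${dev_map[$1]}   # e.g. initiator0 -> cvl_0_0
        cat "/sys/class/net/$dev/ifalias"
    }
    get_ip_address initiator0   # -> 10.0.0.1
    # the target0 variant does the same read inside the namespace:
    #   ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
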
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:13.531 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp 
== \r\d\m\a ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=624895 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 624895 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 624895 ']' 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.532 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:13.532 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.532 [2024-11-05 19:24:42.027330] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:13.532 [2024-11-05 19:24:42.028157] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:35:13.532 [2024-11-05 19:24:42.028197] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.532 [2024-11-05 19:24:42.115069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:13.532 [2024-11-05 19:24:42.162781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.532 [2024-11-05 19:24:42.162835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:13.532 [2024-11-05 19:24:42.162844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.532 [2024-11-05 19:24:42.162851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.532 [2024-11-05 19:24:42.162857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.532 [2024-11-05 19:24:42.164710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:13.532 [2024-11-05 19:24:42.164843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:13.532 [2024-11-05 19:24:42.165179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:13.532 [2024-11-05 19:24:42.165183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:13.532 [2024-11-05 19:24:42.237946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:13.532 [2024-11-05 19:24:42.239334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:13.532 [2024-11-05 19:24:42.239380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:13.532 [2024-11-05 19:24:42.239956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:13.532 [2024-11-05 19:24:42.240016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 [2024-11-05 19:24:42.906190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 Malloc0 00:35:13.794 19:24:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.794 19:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:13.794 [2024-11-05 19:24:43.006354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:35:13.794 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:35:13.794 { 00:35:13.795 "params": { 00:35:13.795 "name": "Nvme$subsystem", 00:35:13.795 "trtype": "$TEST_TRANSPORT", 00:35:13.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:13.795 "adrfam": "ipv4", 00:35:13.795 "trsvcid": "$NVMF_PORT", 00:35:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:13.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:13.795 "hdgst": ${hdgst:-false}, 00:35:13.795 "ddgst": ${ddgst:-false} 00:35:13.795 }, 00:35:13.795 "method": "bdev_nvme_attach_controller" 00:35:13.795 } 00:35:13.795 EOF 00:35:13.795 )") 00:35:13.795 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:35:13.795 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
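
The gen_nvmf_target_json trace above shows the pattern used to drive bdevio: each subsystem's bdev_nvme_attach_controller parameters are emitted as a heredoc fragment with shell placeholders, the fragments are comma-joined, and jq validates/pretty-prints the result before it reaches bdevio on /dev/fd/62. A minimal sketch of that pattern, with hard-coded stand-ins for the harness variables rather than the exact helpers in nvmf/common.sh:

#!/usr/bin/env bash
# Sketch only: placeholder values stand in for the test-harness variables.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do   # the real helper iterates "${@:-1}"
# The heredoc is expanded immediately, so $TEST_TRANSPORT etc. are
# substituted at generation time -- matching the rendered JSON that
# appears in the trace right after this point.
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments and pretty-print, as the IFS=, / printf / jq
# steps in the trace do; with one subsystem this is a single JSON object,
# exactly as rendered in the log below.
IFS=,
printf '%s\n' "${config[*]}" | jq .
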
00:35:13.795 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:35:13.795 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:13.795 "params": { 00:35:13.795 "name": "Nvme1", 00:35:13.795 "trtype": "tcp", 00:35:13.795 "traddr": "10.0.0.2", 00:35:13.795 "adrfam": "ipv4", 00:35:13.795 "trsvcid": "4420", 00:35:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:13.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:13.795 "hdgst": false, 00:35:13.795 "ddgst": false 00:35:13.795 }, 00:35:13.795 "method": "bdev_nvme_attach_controller" 00:35:13.795 }' 00:35:13.795 [2024-11-05 19:24:43.065700] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:35:13.795 [2024-11-05 19:24:43.065782] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625047 ] 00:35:14.055 [2024-11-05 19:24:43.143545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:14.055 [2024-11-05 19:24:43.188102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.055 [2024-11-05 19:24:43.188221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.055 [2024-11-05 19:24:43.188224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.055 I/O targets: 00:35:14.055 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:14.055 00:35:14.055 00:35:14.055 CUnit - A unit testing framework for C - Version 2.1-3 00:35:14.055 http://cunit.sourceforge.net/ 00:35:14.055 00:35:14.055 00:35:14.055 Suite: bdevio tests on: Nvme1n1 00:35:14.315 Test: blockdev write read block ...passed 00:35:14.315 Test: blockdev write zeroes read block ...passed 00:35:14.315 Test: blockdev write zeroes read no split ...passed 00:35:14.315 Test: blockdev write zeroes read split ...passed 00:35:14.315 Test: blockdev write zeroes read split partial ...passed 00:35:14.315 Test: blockdev reset ...[2024-11-05 19:24:43.573194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:14.315 [2024-11-05 19:24:43.573259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1525970 (9): Bad file descriptor 00:35:14.575 [2024-11-05 19:24:43.668336] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:14.575 passed 00:35:14.575 Test: blockdev write read 8 blocks ...passed 00:35:14.575 Test: blockdev write read size > 128k ...passed 00:35:14.575 Test: blockdev write read invalid size ...passed 00:35:14.575 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:14.575 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:14.575 Test: blockdev write read max offset ...passed 00:35:14.575 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:14.575 Test: blockdev writev readv 8 blocks ...passed 00:35:14.575 Test: blockdev writev readv 30 x 1block ...passed 00:35:14.836 Test: blockdev writev readv block ...passed 00:35:14.836 Test: blockdev writev readv size > 128k ...passed 00:35:14.836 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:14.836 Test: blockdev comparev and writev ...[2024-11-05 19:24:43.935558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.935584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.935596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.935602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.936143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.936151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.936160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.936166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.936702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.936713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.936723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.936728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.937270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.937277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:14.836 [2024-11-05 19:24:43.937287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:14.836 [2024-11-05 19:24:43.937292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:14.836 passed 00:35:14.836 Test: blockdev nvme passthru rw ...passed 00:35:14.837 Test: blockdev nvme passthru vendor specific ...[2024-11-05 19:24:44.021681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:14.837 [2024-11-05 19:24:44.021691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:14.837 [2024-11-05 19:24:44.022080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:14.837 [2024-11-05 19:24:44.022088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:14.837 [2024-11-05 19:24:44.022436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:14.837 [2024-11-05 19:24:44.022443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:14.837 [2024-11-05 19:24:44.022801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:14.837 [2024-11-05 19:24:44.022808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:14.837 passed 00:35:14.837 Test: blockdev nvme admin passthru ...passed 00:35:14.837 Test: blockdev copy ...passed 00:35:14.837 00:35:14.837 Run Summary: Type Total Ran Passed Failed Inactive 00:35:14.837 suites 1 1 n/a 0 0 00:35:14.837 tests 23 23 23 0 0 00:35:14.837 asserts 152 152 152 0 n/a 00:35:14.837 00:35:14.837 Elapsed time = 1.425 seconds 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:15.097 rmmod nvme_tcp 00:35:15.097 rmmod nvme_fabrics 00:35:15.097 rmmod nvme_keyring 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
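
The teardown that follows the CUnit summary (common.sh@99-@107 above) syncs and then unloads nvme-tcp and nvme-fabrics with failures tolerated: set +e guards the loop so a module still held busy by a draining connection does not abort the test. A rough reconstruction of that shape -- hedged, because the trace only shows the entry points (set +e, for i in {1..20}, modprobe -v -r nvme-tcp) and not the full loop body:

sync

set +e                                # tolerate "module in use" while connections drain
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break    # assumption: retry until the unload succeeds
  sleep 1                             # assumption: brief pause between attempts
done
modprobe -v -r nvme-fabrics
set -e

Note that modprobe -r also removes now-unused dependencies, which would account for the extra "rmmod nvme_fabrics" and "rmmod nvme_keyring" lines in the output above.
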
00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 624895 ']' 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 624895 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 624895 ']' 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 624895 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 624895 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 624895' 00:35:15.097 killing process with pid 624895 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 624895 00:35:15.097 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 624895 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:15.357 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:15.358 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:35:17.268 00:35:17.268 real 0m12.300s 00:35:17.268 user 0m9.902s 00:35:17.268 sys 0m6.414s 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:17.268 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:17.268 ************************************ 00:35:17.268 END TEST nvmf_bdevio 00:35:17.268 ************************************ 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:17.529 19:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:17.529 ************************************ 00:35:17.529 START TEST nvmf_zcopy 00:35:17.529 ************************************ 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:17.529 * Looking for test storage... 00:35:17.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.529 --rc genhtml_branch_coverage=1 00:35:17.529 --rc genhtml_function_coverage=1 00:35:17.529 --rc genhtml_legend=1 00:35:17.529 --rc geninfo_all_blocks=1 00:35:17.529 --rc geninfo_unexecuted_blocks=1 00:35:17.529 00:35:17.529 ' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.529 --rc genhtml_branch_coverage=1 00:35:17.529 --rc genhtml_function_coverage=1 00:35:17.529 --rc genhtml_legend=1 00:35:17.529 --rc geninfo_all_blocks=1 00:35:17.529 --rc geninfo_unexecuted_blocks=1 00:35:17.529 00:35:17.529 ' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.529 --rc genhtml_branch_coverage=1 00:35:17.529 --rc genhtml_function_coverage=1 00:35:17.529 --rc genhtml_legend=1 00:35:17.529 --rc geninfo_all_blocks=1 00:35:17.529 --rc geninfo_unexecuted_blocks=1 00:35:17.529 00:35:17.529 ' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:17.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.529 --rc genhtml_branch_coverage=1 00:35:17.529 --rc genhtml_function_coverage=1 00:35:17.529 --rc genhtml_legend=1 00:35:17.529 --rc geninfo_all_blocks=1 00:35:17.529 --rc geninfo_unexecuted_blocks=1 00:35:17.529 00:35:17.529 ' 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.529 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:17.790 19:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:17.790 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:17.791 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:17.791 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:35:17.791 19:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:35:24.511 19:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.511 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:24.512 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:24.512 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:24.512 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:24.512 19:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:24.512 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, 
max = _dev )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:24.512 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:24.513 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:24.513 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:24.774 19:24:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:24.774 10.0.0.1 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.774 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:24.775 10.0.0.2 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set 
cvl_0_1 up' 00:35:24.775 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:24.775 19:24:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:24.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:24.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.533 ms 00:35:24.775 00:35:24.775 --- 10.0.0.1 ping statistics --- 00:35:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.775 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:24.775 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:25.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:35:25.038 00:35:25.038 --- 10.0.0.2 ping statistics --- 00:35:25.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.038 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 
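For reference, the set_ip steps traced above reduce to a few lines of shell: a helper unpacks the integer drawn from the 0x0a000001 pool into a dotted quad, the address is assigned with ip addr, and the same value is mirrored into the interface's ifalias so later get_ip_address lookups can recover it with a plain cat. A minimal sketch, with the byte-shift arithmetic inferred from the printf arguments in the trace (the real val_to_ip in nvmf/setup.sh may differ):

  val_to_ip() {
      # Unpack a pool value such as 167772161 (0x0a000001) into 10.0.0.1.
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(((val >> 24) & 0xff)) $(((val >> 16) & 0xff)) \
          $(((val >> 8) & 0xff)) $((val & 0xff))
  }
  ip=$(val_to_ip 167772161)                        # -> 10.0.0.1
  ip addr add "$ip/24" dev cvl_0_0                 # assign the initiator address
  echo "$ip" | tee /sys/class/net/cvl_0_0/ifalias  # record it for later lookups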
00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # 
echo cvl_0_1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:35:25.038 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:25.039 19:24:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=629474 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 629474 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 629474 ']' 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:25.039 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.039 [2024-11-05 19:24:54.310578] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:25.039 [2024-11-05 19:24:54.311732] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:35:25.039 [2024-11-05 19:24:54.311796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.300 [2024-11-05 19:24:54.409698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.300 [2024-11-05 19:24:54.462671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.300 [2024-11-05 19:24:54.462721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.300 [2024-11-05 19:24:54.462731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.300 [2024-11-05 19:24:54.462738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.300 [2024-11-05 19:24:54.462744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.300 [2024-11-05 19:24:54.463405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.300 [2024-11-05 19:24:54.547127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:25.300 [2024-11-05 19:24:54.547421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
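The nvmfappstart sequence traced above launches the target inside the nvmf_ns_spdk namespace and then waits for its RPC socket to come up. A rough sketch of the same sequence, with waitforlisten reduced to a simple poll loop against the socket named in the log (command line and paths copied from the trace):

  sudo ip netns exec nvmf_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll until the app answers on the default UNIX domain RPC socket.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.5
  done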
00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.872 [2024-11-05 19:24:55.160274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:25.872 [2024-11-05 19:24:55.188574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.872 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.133 19:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.133 malloc0 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:35:26.133 { 00:35:26.133 "params": { 00:35:26.133 "name": "Nvme$subsystem", 00:35:26.133 "trtype": "$TEST_TRANSPORT", 00:35:26.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.133 "adrfam": "ipv4", 00:35:26.133 "trsvcid": "$NVMF_PORT", 00:35:26.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.133 "hdgst": ${hdgst:-false}, 00:35:26.133 "ddgst": ${ddgst:-false} 00:35:26.133 }, 00:35:26.133 "method": "bdev_nvme_attach_controller" 00:35:26.133 } 00:35:26.133 EOF 00:35:26.133 )") 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:35:26.133 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:26.133 "params": { 00:35:26.133 "name": "Nvme1", 00:35:26.133 "trtype": "tcp", 00:35:26.133 "traddr": "10.0.0.2", 00:35:26.133 "adrfam": "ipv4", 00:35:26.133 "trsvcid": "4420", 00:35:26.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.133 "hdgst": false, 00:35:26.133 "ddgst": false 00:35:26.133 }, 00:35:26.133 "method": "bdev_nvme_attach_controller" 00:35:26.133 }' 00:35:26.133 [2024-11-05 19:24:55.291171] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
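The rpc_cmd calls traced above correspond to plain scripts/rpc.py invocations, with every flag copied from the trace: create the zero-copy-enabled TCP transport, a subsystem allowing up to ten namespaces, the data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks exposed as namespace 1. As a sketch (assuming the default /var/tmp/spdk.sock RPC socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1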
00:35:26.133 [2024-11-05 19:24:55.291239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629750 ]
00:35:26.133 [2024-11-05 19:24:55.365955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:26.133 [2024-11-05 19:24:55.407952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:26.394 Running I/O for 10 seconds...
00:35:28.277 6643.00 IOPS, 51.90 MiB/s
[2024-11-05T18:24:58.985Z] 6685.50 IOPS, 52.23 MiB/s
[2024-11-05T18:24:59.926Z] 6698.67 IOPS, 52.33 MiB/s
[2024-11-05T18:25:00.867Z] 6702.75 IOPS, 52.37 MiB/s
[2024-11-05T18:25:01.810Z] 6732.40 IOPS, 52.60 MiB/s
[2024-11-05T18:25:02.752Z] 7223.50 IOPS, 56.43 MiB/s
[2024-11-05T18:25:03.694Z] 7576.29 IOPS, 59.19 MiB/s
[2024-11-05T18:25:04.634Z] 7836.88 IOPS, 61.23 MiB/s
[2024-11-05T18:25:05.576Z] 8040.56 IOPS, 62.82 MiB/s
[2024-11-05T18:25:05.837Z] 8204.10 IOPS, 64.09 MiB/s
00:35:36.514                                                            Latency(us)
00:35:36.514 [2024-11-05T18:25:05.837Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:36.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:35:36.514 Verification LBA range: start 0x0 length 0x1000
00:35:36.514 Nvme1n1                      :      10.01    8207.51      64.12       0.00       0.00   15542.96    1515.52   26323.63
00:35:36.514 [2024-11-05T18:25:05.837Z] ===================================================================================================================
00:35:36.514 [2024-11-05T18:25:05.837Z] Total                       :               8207.51      64.12       0.00       0.00   15542.96    1515.52   26323.63
00:35:36.514 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=631748
00:35:36.514 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:35:36.514 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:35:36.515 {
00:35:36.515 "params": {
00:35:36.515 "name": "Nvme$subsystem",
00:35:36.515 "trtype": "$TEST_TRANSPORT",
00:35:36.515 "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:36.515 "adrfam": "ipv4",
00:35:36.515 "trsvcid": "$NVMF_PORT",
00:35:36.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:36.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:36.515 "hdgst": ${hdgst:-false},
00:35:36.515 "ddgst": ${ddgst:-false}
00:35:36.515 },
00:35:36.515 "method": "bdev_nvme_attach_controller"
00:35:36.515 }
00:35:36.515 EOF
00:35:36.515 )")
00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat
00:35:36.515
[2024-11-05 19:25:05.703798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.703827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:35:36.515 19:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:36.515 "params": { 00:35:36.515 "name": "Nvme1", 00:35:36.515 "trtype": "tcp", 00:35:36.515 "traddr": "10.0.0.2", 00:35:36.515 "adrfam": "ipv4", 00:35:36.515 "trsvcid": "4420", 00:35:36.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.515 "hdgst": false, 00:35:36.515 "ddgst": false 00:35:36.515 }, 00:35:36.515 "method": "bdev_nvme_attach_controller" 00:35:36.515 }' 00:35:36.515 [2024-11-05 19:25:05.715770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.715779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.727770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.727778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.739769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.739777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.744875] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
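Each ERROR pair above records one rejected attempt to register NSID 1 while the namespace added during setup still exists: subsystem.c refuses the duplicate ID and nvmf_rpc.c then fails the paused-add path. The zcopy test appears to provoke this deliberately, in a loop, while I/O is in flight; a single such attempt looks roughly like this sketch:

  # Expected to fail as long as namespace 1 is still attached to cnode1.
  if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
      echo 'NSID 1 still in use, as expected'
  fi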
00:35:36.515 [2024-11-05 19:25:05.744924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631748 ] 00:35:36.515 [2024-11-05 19:25:05.751768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.751775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.763768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.763776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.775769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.775776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.787767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.787775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.799767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.799775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.811768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.811774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.814496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.515 [2024-11-05 19:25:05.823768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.823778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.515 [2024-11-05 19:25:05.835775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.515 [2024-11-05 19:25:05.835785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.847769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.847779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.849778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.776 [2024-11-05 19:25:05.859774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.859781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.871774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.871788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.883771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.883781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.895769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:35:36.776 [2024-11-05 19:25:05.895778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.907770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.907776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.919817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.919831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.931772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.931782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.943770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.943778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.955769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.955778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.967769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.967777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.979767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.979774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:05.991768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:05.991777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.003772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.003783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.015789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.015802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.027772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.027785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 Running I/O for 5 seconds... 
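The 5-second run announced above is the second bdevperf invocation from the trace: a 50/50 random read/write workload (-w randrw -M 50) at queue depth 128 with 8 KiB I/O, reading its controller configuration from the JSON that gen_nvmf_target_json emits, delivered on fd 63 via process substitution. Roughly (sketch; gen_nvmf_target_json is the nvmf/common.sh helper traced earlier):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!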
00:35:36.776 [2024-11-05 19:25:06.043228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.043249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.056394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.056409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.068902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.068917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.078218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.078233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.087383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.087397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:36.776 [2024-11-05 19:25:06.099885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:36.776 [2024-11-05 19:25:06.099900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.106087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.106101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.119047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.119062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.132281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.132295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.145057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.145073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.155546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.155561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.036 [2024-11-05 19:25:06.168141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.036 [2024-11-05 19:25:06.168155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.180734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.180752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.192830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.192844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.204769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 
[2024-11-05 19:25:06.204783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.216806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.216821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.227555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.227569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.240523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.240537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.252827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.252842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.263560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.263578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.276392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.276406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.288688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.288702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.300716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.300730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.312773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.312787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.324938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.324952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.335820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.335835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.341646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.341660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.037 [2024-11-05 19:25:06.350668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.037 [2024-11-05 19:25:06.350682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.297 [2024-11-05 19:25:06.363395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:37.297 [2024-11-05 19:25:06.363409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:37.297 [2024-11-05 19:25:06.375611] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:37.297 [2024-11-05 19:25:06.375625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:37.297 [2024-11-05 19:25:06.388311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:37.297 [2024-11-05 19:25:06.388324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line ERROR pair repeats for every add-namespace attempt from 19:25:06.400452 through 19:25:07.023970 ...]
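The interleaved pair above is SPDK's duplicate-NSID rejection: spdk_nvmf_subsystem_add_ns_ext() (subsystem.c:2123) refuses the request because NSID 1 is already allocated in the subsystem, and the RPC-side callback nvmf_rpc_ns_paused() (nvmf_rpc.c:1517) then reports the failed add. A minimal sketch that reproduces the same two errors by hand against a running nvmf target; the NQN, serial number, and Malloc bdev names below are illustrative placeholders, not values taken from this run:

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
# First attach claims NSID 1 in the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
# Second attach requests the same NSID and fails with the two errors seen here
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1

Judging from the timestamps, the test re-issues the add-namespace RPC in a tight loop (roughly every 5-12 ms) while I/O keeps running, so the same pair recurs once per attempt.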
00:35:37.820 [2024-11-05 19:25:07.029633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:37.820 [2024-11-05 19:25:07.029648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:37.820 19351.00 IOPS, 151.18 MiB/s [2024-11-05T18:25:07.143Z]
[... the same ERROR pair repeats for every attempt from 19:25:07.038061 through 19:25:08.035961 ...]
00:35:38.864 19416.00 IOPS, 151.69 MiB/s [2024-11-05T18:25:08.187Z]
[... the same ERROR pair repeats for every attempt from 19:25:08.042234 through 19:25:08.115442 ...]
00:35:38.864 [2024-11-05 19:25:08.128481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:38.864 [2024-11-05 19:25:08.128495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same ERROR pair repeats for every attempt from 19:25:08.141144 through 19:25:09.033658 ...]
00:35:39.909 19430.00 IOPS, 151.80 MiB/s [2024-11-05T18:25:09.232Z]
[... the same ERROR pair repeats for every attempt from 19:25:09.042325 through 19:25:09.684471 ...]
00:35:40.431 [2024-11-05 19:25:09.696982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:40.431 [2024-11-05 19:25:09.696996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:40.431 [2024-11-05 19:25:09.706112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:40.431 [2024-11-05 19:25:09.706126]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.431 [2024-11-05 19:25:09.715522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.431 [2024-11-05 19:25:09.715537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.431 [2024-11-05 19:25:09.728260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.431 [2024-11-05 19:25:09.728274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.431 [2024-11-05 19:25:09.740695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.431 [2024-11-05 19:25:09.740709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.431 [2024-11-05 19:25:09.752755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.431 [2024-11-05 19:25:09.752769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.764810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.764824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.777126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.777144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.788417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.788431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.800815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.800830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.811734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.811753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.817810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.817824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.826476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.826490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.839286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.839301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.851599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.851614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.864033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.864048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.870289] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.870302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.883200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.883215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.896023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.896038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.902587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.902601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.915294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.915309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.927826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.927842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.940492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.940506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.953082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.953097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.963923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.963937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.969754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.969768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.978533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.978550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.991840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.991854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:09.998105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:09.998119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.692 [2024-11-05 19:25:10.011520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.692 [2024-11-05 19:25:10.011536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.024541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.024556] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.036875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.036889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 19422.50 IOPS, 151.74 MiB/s [2024-11-05T18:25:10.275Z] [2024-11-05 19:25:10.048026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.048041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.060563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.060580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.073066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.073082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.083954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.083969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.096474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.952 [2024-11-05 19:25:10.096490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.952 [2024-11-05 19:25:10.108770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.108785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.120770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.120785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.132194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.132208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.144665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.144679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.157065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.157080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.169153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.169168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.178321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.178335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.187055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.187070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 
19:25:10.200086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.200108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.212735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.212755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.224803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.224818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.236630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.236645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.249013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.249027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.260555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.260570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:40.953 [2024-11-05 19:25:10.273198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:40.953 [2024-11-05 19:25:10.273213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.284464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.284478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.296632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.296646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.308616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.308630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.321180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.321194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.331557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.331571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.344166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.344180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.356697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.356711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.368721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.368735] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.380954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.380968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.390831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.390845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.403284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.403298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.415985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.415999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.422065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.422080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.434874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.434888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.447882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.447897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.454161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.454175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.463206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.463221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.475891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.475906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.482416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.482430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.495375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.495390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.507600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.507615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.519911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.519926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.525919] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.525933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.214 [2024-11-05 19:25:10.538554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.214 [2024-11-05 19:25:10.538568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.551318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.551333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.564522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.564536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.576147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.576161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.588811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.588826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.599804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.599818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.605551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.605566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.615490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.615505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.628180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.628194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.640576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.640590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.652728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.652742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.664607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.664621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.676849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.676863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.689077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.689092] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.699755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.699769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.712671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.712686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.724883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.724897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.736704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.736718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.749229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.749244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.760234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.760249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.772646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.772661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.784538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.784552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.475 [2024-11-05 19:25:10.797244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.475 [2024-11-05 19:25:10.797258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.807357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.807372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.819944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.819959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.826189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.826204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.835401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.835416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.848220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.848234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.861224] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.861238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.872129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.872143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.884988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.885003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.894093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.894107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.903465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.903479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.916343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.916357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.928732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.928750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.940395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.940409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.952739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.952757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.964903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.735 [2024-11-05 19:25:10.964917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.735 [2024-11-05 19:25:10.976939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:10.976953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:10.987353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:10.987367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:11.000444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.000458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:11.012548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.012562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:11.024414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.024428] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:11.036925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.036939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 19429.20 IOPS, 151.79 MiB/s [2024-11-05T18:25:11.059Z] [2024-11-05 19:25:11.046309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.046323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 [2024-11-05 19:25:11.051775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.051791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.736 00:35:41.736 Latency(us) 00:35:41.736 [2024-11-05T18:25:11.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.736 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:41.736 Nvme1n1 : 5.01 19429.53 151.79 0.00 0.00 6581.26 2553.17 11414.19 00:35:41.736 [2024-11-05T18:25:11.059Z] =================================================================================================================== 00:35:41.736 [2024-11-05T18:25:11.059Z] Total : 19429.53 151.79 0.00 0.00 6581.26 2553.17 11414.19 00:35:41.736 [2024-11-05 19:25:11.059771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.736 [2024-11-05 19:25:11.059783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.067771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.067782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.075777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.075789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.083774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.083784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.091774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.091783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.099772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.099783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.107771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.107779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.115770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 19:25:11.115778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:41.996 [2024-11-05 19:25:11.123769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:41.996 [2024-11-05 
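Two notes on the block above. The summary is consistent with the progress records: at the job's 8192-byte I/O size, 19,429.53 IOPS x 8,192 B = 159,166,710 B/s, and 159,166,710 / 1,048,576 ≈ 151.79, matching the MiB/s column. And the error flood is the expected effect of re-adding a namespace ID that is still attached; a minimal hand reproduction with SPDK's RPC client would look like this sketch (the bdev names are illustrative, not taken from this run):

    # The second add under the same subsystem reuses NSID 1 and is rejected
    # with "Requested NSID 1 already in use"; the RPC layer then logs
    # "Unable to add namespace", matching the paired records above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # fails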
00:35:41.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (631748) - No such process
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 631748
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:41.996 delay0
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:41.996 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:35:41.996 [2024-11-05 19:25:11.292106] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:35:48.576 Initializing NVMe Controllers
00:35:48.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:48.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:48.576 Initialization complete. Launching workers.
00:35:48.576 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1230
00:35:48.576 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1517, failed to submit 33
00:35:48.576 success 1346, unsuccessful 171, failed 0
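Two reading aids for the commands above. bdev_delay_create's numeric flags are latencies in microseconds: -r/-t are average and p99 read latency, -w/-n are average and p99 write latency, so delay0 wraps malloc0 with roughly one second of injected latency per I/O, which is what keeps commands in flight long enough for the abort example to have something to cancel. A stand-alone sketch of the same wiring (the malloc bdev's size and block size are assumptions; this run created malloc0 earlier):

    scripts/rpc.py bdev_malloc_create -b malloc0 64 512          # assumed: 64 MiB, 512 B blocks
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s avg/p99, read and write
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The abort counters are also self-consistent: 320 completed + 1230 failed = 1550 I/O issued, matching 1517 aborts submitted + 33 that could not be submitted; of those submitted, 1346 succeeded + 171 unsuccessful = 1517, with 0 abort commands failing outright.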
00:35:48.576 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20}
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 629474 ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 629474
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 629474 ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 629474
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 629474
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 629474'
killing process with pid 629474
19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 629474
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 629474
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini
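For reference outside the harness, nvmfcleanup and killprocess reduce to unloading the kernel initiator modules and stopping the target; a rough manual equivalent (the PID 629474 is of course specific to this run):

    modprobe -v -r nvme-tcp       # verbose output is the rmmod lines above:
                                  # nvme_tcp, then its now-unused deps nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics   # usually a no-op at this point
    kill 629474                   # stop the nvmf_tgt reactor (ps reports it as reactor_1)
    wait 629474                   # reap it; only works from the shell that spawned it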
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:35:48.577 19:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=()
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:35:51.119 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore
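Stripped of the xtrace wrappers, nvmf_fini's per-device teardown is just an address flush per mapped interface plus an iptables scrub; condensed (interface names as mapped by this run):

    ip addr flush dev cvl_0_0
    ip addr flush dev cvl_0_1
    # iptr: re-load the current ruleset minus any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore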
00:35:51.119 
00:35:51.119 real 0m33.243s
00:35:51.119 user 0m42.461s
00:35:51.119 sys 0m11.157s
00:35:51.120 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:35:51.120 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:51.120 ************************************
00:35:51.120 END TEST nvmf_zcopy
00:35:51.120 ************************************
00:35:51.120 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:35:51.120 
00:35:51.120 real 4m47.829s
00:35:51.120 user 10m12.827s
00:35:51.120 sys 1m57.220s
00:35:51.120 19:25:19 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:35:51.120 19:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:51.120 ************************************
00:35:51.120 END TEST nvmf_target_core_interrupt_mode
00:35:51.120 ************************************
00:35:51.120 19:25:19 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:35:51.120 19:25:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:35:51.120 19:25:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:35:51.120 19:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:51.120 ************************************
00:35:51.120 START TEST nvmf_interrupt
00:35:51.120 ************************************
00:35:51.120 19:25:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:35:51.120 * Looking for test storage...
00:35:51.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.120 --rc genhtml_branch_coverage=1 00:35:51.120 --rc genhtml_function_coverage=1 00:35:51.120 --rc genhtml_legend=1 00:35:51.120 --rc geninfo_all_blocks=1 00:35:51.120 --rc geninfo_unexecuted_blocks=1 00:35:51.120 00:35:51.120 ' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.120 --rc genhtml_branch_coverage=1 00:35:51.120 --rc genhtml_function_coverage=1 00:35:51.120 --rc genhtml_legend=1 00:35:51.120 --rc geninfo_all_blocks=1 00:35:51.120 --rc geninfo_unexecuted_blocks=1 00:35:51.120 00:35:51.120 ' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.120 --rc genhtml_branch_coverage=1 00:35:51.120 --rc genhtml_function_coverage=1 00:35:51.120 --rc genhtml_legend=1 00:35:51.120 --rc geninfo_all_blocks=1 00:35:51.120 --rc geninfo_unexecuted_blocks=1 00:35:51.120 00:35:51.120 ' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:51.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:51.120 --rc genhtml_branch_coverage=1 00:35:51.120 --rc genhtml_function_coverage=1 00:35:51.120 --rc genhtml_legend=1 00:35:51.120 --rc geninfo_all_blocks=1 00:35:51.120 --rc geninfo_unexecuted_blocks=1 00:35:51.120 00:35:51.120 ' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:51.120 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:35:51.121 19:25:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 
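The repeated /opt/golangci, /opt/protoc and /opt/go segments above come from paths/export.sh prepending the same toolchain directories each time it is sourced, so PATH accumulates duplicates; harmless, but it is why the echoed PATH is so long. If one wanted the prepend to be idempotent, a guard along these lines would do (helper name is made up for illustration):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin   # second call is a no-op
    export PATH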
mellanox=0x15b3 pci net_dev 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:59.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:59.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:59.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:59.265 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:59.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:59.266 19:25:27 
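gather_supported_nvmf_pci_devs works in two passes, both visible above: it buckets PCI functions by vendor:device ID (0x8086:0x159b is the E810 family bound to the ice driver, hence the two "Found 0000:4b:00.x" lines), then resolves each function to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. A rough standalone equivalent of that sysfs walk (not SPDK's helper itself):

    # List kernel net devices for every PCI function matching vendor/device.
    vendor=0x8086 device=0x159b   # E810-family ports, as matched above
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$vendor" && $(< "$pci/device") == "$device" ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done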
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # create_target_ns 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt 
-- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:59.266 10.0.0.1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:59.266 10.0.0.2 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:59.266 19:25:27 
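setup_interfaces carries the address pool as a single integer (0x0a000001 = 167772161) and converts it to dotted-quad only at assignment time; the trace shows val_to_ip ending in printf '%u.%u.%u.%u\n' 10 0 0 1. The byte decomposition itself is not expanded in the xtrace, but it is presumably the usual shift-and-mask, along these lines:

    # Convert a 32-bit integer to dotted-quad notation.
    val_to_ip() {
        local v=$1
        printf '%u.%u.%u.%u\n' \
            $(( v >> 24 & 255 )) $(( v >> 16 & 255 )) \
            $(( v >>  8 & 255 )) $(( v       & 255 ))
    }
    val_to_ip 167772161   # 10.0.0.1, the initiator side
    val_to_ip 167772162   # 10.0.0.2, the target side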
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:59.266 
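Condensed, the nvmf_tcp_init sequence traced above amounts to: move the target-side port into a network namespace, address both ends, record each IP in the device's ifalias for later lookup, bring the links up, and punch a tagged iptables hole for the NVMe/TCP listener port. Stripped of the harness indirection (device names are the ones from this run):

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_0               # initiator side, host namespace
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'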
19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.266 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:59.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:59.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.528 ms 00:35:59.267 00:35:59.267 --- 10.0.0.1 ping statistics --- 00:35:59.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.267 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:59.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:59.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:35:59.267 00:35:59.267 --- 10.0.0.2 ping statistics --- 00:35:59.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.267 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 
-- # return 1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.267 19:25:27 
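The nvmf_legacy_env block above recovers each side's IP by reading back the ifalias written during set_ip rather than parsing `ip addr` output; devices that were never configured (initiator1, target1) fail the `[[ -n ... ]]` check and resolve to empty strings, which is why NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up blank on this two-port rig. The lookup reduces to roughly this (hypothetical helper, not the harness's exact code):

    # Read the IP recorded for a device, optionally inside a netns.
    get_dev_ip() {
        local dev=$1 ns=$2
        if [[ -n $ns ]]; then
            ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }
    get_dev_ip cvl_0_0                  # -> 10.0.0.1
    get_dev_ip cvl_0_1 nvmf_ns_spdk    # -> 10.0.0.2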
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=638120 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 638120 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 638120 ']' 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:59.267 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.267 [2024-11-05 19:25:27.609167] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:59.267 [2024-11-05 19:25:27.610253] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:35:59.267 [2024-11-05 19:25:27.610301] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.267 [2024-11-05 19:25:27.707038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:59.267 [2024-11-05 19:25:27.746756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.268 [2024-11-05 19:25:27.746792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.268 [2024-11-05 19:25:27.746802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.268 [2024-11-05 19:25:27.746811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.268 [2024-11-05 19:25:27.746818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
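nvmfappstart launches nvmf_tgt inside the namespace with `-m 0x3 --interrupt-mode` (reactors on cores 0 and 1, event-driven rather than polling) and then blocks in waitforlisten until PID 638120 answers on /var/tmp/spdk.sock. The wait is essentially a bounded poll of the RPC socket; a rough equivalent, assuming the SPDK repo root as CWD (a sketch of the idea, not the harness's exact loop):

    # Start the target in the namespace and wait for its RPC socket.
    ip netns exec nvmf_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    for _ in {1..100}; do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.1
    done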
00:35:59.268 [2024-11-05 19:25:27.748205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.268 [2024-11-05 19:25:27.748210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.268 [2024-11-05 19:25:27.803334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:59.268 [2024-11-05 19:25:27.803736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:59.268 [2024-11-05 19:25:27.804122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:59.268 5000+0 records in 00:35:59.268 5000+0 records out 00:35:59.268 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170843 s, 599 MB/s 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 AIO0 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 [2024-11-05 19:25:27.932829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 
00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:59.268 [2024-11-05 19:25:27.973489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 638120 0 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 0 idle 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:35:59.268 19:25:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638120 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.25 reactor_0' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638120 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.25 reactor_0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 
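Target provisioning in this test is a file-backed bdev plus four RPCs: the dd above writes 5000 blocks of 2048 bytes (10,240,000 bytes, matching the reported 10 MB), bdev_aio_create exposes the file as AIO0 with a 2048-byte block size, and the subsystem/listener RPCs publish it at 10.0.0.2:4420. Replayed directly (arguments copied from the trace):

    dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420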
-- # reactor_is_idle 638120 1 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 1 idle 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638124 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638124 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=638270 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 638120 0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 638120 0 busy 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # 
local state=busy 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638120 root 20 0 128.2g 44928 32256 R 20.0 0.0 0:00.28 reactor_0' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638120 root 20 0 128.2g 44928 32256 R 20.0 0.0 0:00.28 reactor_0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=20.0 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=20 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:59.268 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:59.269 19:25:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638120 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.65 reactor_0' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638120 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.65 reactor_0 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 638120 1 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- 
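The reactor_is_idle/reactor_is_busy checks interleaved above all share one mechanism: take a single threads-mode snapshot of top for the target PID, pick out the reactor thread's line, read the %CPU column (field 9), truncate it to an integer, and compare against a threshold (idle means at most 30, busy means at least the busy threshold, which the test lowers to 30 while perf ramps up), retrying up to 10 times with a 1 s sleep. The core of it is approximately:

    # %CPU of one reactor thread from a single top snapshot (field 9).
    reactor_cpu() {
        local pid=$1 name=$2
        top -bHn 1 -p "$pid" -w 256 | awk -v n="$name" '$NF == n { print $9 }'
    }
    rate=$(reactor_cpu 638120 reactor_0)   # e.g. "0.0" idle, "99.9" under load
    (( ${rate%.*} <= 30 )) && echo "reactor_0 is idle"

In interrupt mode the idle assertion is the interesting one: with no I/O in flight the reactors should sit near 0% instead of spinning at 100% as a polling-mode target would.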
# reactor_is_busy_or_idle 638120 1 busy 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638124 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.38 reactor_1' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638124 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.38 reactor_1 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:00.655 19:25:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 638270 00:36:10.648 Initializing NVMe Controllers 00:36:10.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.648 Controller IO queue size 256, less than required. 00:36:10.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:10.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:10.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:10.648 Initialization complete. Launching workers. 
00:36:10.648 ======================================================== 00:36:10.648 Latency(us) 00:36:10.648 Device Information : IOPS MiB/s Average min max 00:36:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16551.50 64.65 15474.90 2714.61 22985.01 00:36:10.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19925.60 77.83 12849.36 7538.46 29098.53 00:36:10.648 ======================================================== 00:36:10.648 Total : 36477.10 142.49 14040.70 2714.61 29098.53 00:36:10.648 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 638120 0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 0 idle 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638120 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.25 reactor_0' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638120 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.25 reactor_0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 638120 1 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 1 idle 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
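The run that produced the table above is the spdk_nvme_perf invocation from earlier in the trace: queue depth 256, 4 KiB random I/O at a 30/70 read/write mix for 10 s, with the initiator pinned to cores 2 and 3 (-c 0xC), one I/O queue per core as the two "from core" rows show; the per-core IOPS sum to the Total row (16551.50 + 19925.60 = 36477.10). For reference:

    build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The "Controller IO queue size 256, less than required" notice is perf's own warning that at this queue depth some requests will queue in the NVMe driver rather than on the controller.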
local idx=1 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638124 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:10.648 19:25:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:10.648 19:25:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:10.648 19:25:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:36:10.648 19:25:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:36:10.648 19:25:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:36:10.648 19:25:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
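For reference, the reactor_is_busy_or_idle check traced in the two blocks above boils down to sampling one reactor thread's %CPU with top. A minimal bash sketch (the helper name is mine, not SPDK's; the 30% idle / 65% busy thresholds and the top invocation are the ones visible in the trace):

reactor_cpu_rate() {
    local pid=$1 idx=$2
    # One batch-mode top iteration with threads shown; for a matching
    # reactor_<idx> thread line, column 9 is %CPU. int() drops the
    # fraction, mirroring the cpu_rate=6.7 -> cpu_rate=6 step above.
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
        | sed -e 's/^\s*//g' | awk '{print int($9)}'
}

rate=$(reactor_cpu_rate 638120 0)
if (( ${rate:-100} <= 30 )); then
    echo "reactor_0 is idle (${rate}% CPU)"
fi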
for i in {0..1} 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 638120 0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 0 idle 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638120 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.53 reactor_0' 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638120 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.53 reactor_0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 638120 1 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 638120 1 idle 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=638120 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:12.559 19:25:41 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 638120 -w 256 00:36:12.559 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:12.818 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 638124 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:36:12.818 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 638124 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:36:12.818 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:12.818 19:25:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:12.818 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:12.818 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:12.819 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:12.819 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:12.819 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:12.819 19:25:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:12.819 19:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:13.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:13.078 19:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:13.078 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:36:13.078 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:36:13.078 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:13.079 rmmod nvme_tcp 00:36:13.079 rmmod nvme_fabrics 00:36:13.079 rmmod nvme_keyring 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 638120 ']' 00:36:13.079 
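The connect-and-wait sequence traced above (nvme connect at @50, then waitforserial polling lsblk) reduces to the sketch below. The NQN, host ID, serial and the 15-try/2-second loop are the values shown in this log; everything else is a condensed rendering, not the literal autotest_common.sh source:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

i=0
while (( i++ <= 15 )); do
    sleep 2
    # The namespace appears as a block device whose SERIAL column matches
    # the serial configured on the subsystem; one match means we're connected.
    if (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )); then
        break
    fi
done

The disconnect side (waitforserial_disconnect) inverts the same check: it polls until grep -q -w on the serial stops matching.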
19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 638120 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 638120 ']' 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 638120 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 638120 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 638120' 00:36:13.079 killing process with pid 638120 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 638120 00:36:13.079 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 638120 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@264 -- # local dev 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@267 -- # remove_target_ns 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:36:13.339 19:25:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@268 -- # delete_main_bridge 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # return 0 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip 
addr flush dev cvl_0_1' 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@284 -- # iptr 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-save 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:36:15.250 19:25:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-restore 00:36:15.511 00:36:15.511 real 0m24.578s 00:36:15.511 user 0m40.177s 00:36:15.511 sys 0m9.494s 00:36:15.511 19:25:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:15.511 19:25:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.511 ************************************ 00:36:15.511 END TEST nvmf_interrupt 00:36:15.511 ************************************ 00:36:15.511 00:36:15.511 real 30m0.150s 00:36:15.511 user 61m46.988s 00:36:15.511 sys 10m1.089s 00:36:15.511 19:25:44 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:15.511 19:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.511 ************************************ 00:36:15.511 END TEST nvmf_tcp 00:36:15.511 ************************************ 00:36:15.511 19:25:44 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:36:15.511 19:25:44 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:15.511 19:25:44 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:15.511 19:25:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:15.511 19:25:44 -- common/autotest_common.sh@10 -- # set +x 00:36:15.511 ************************************ 00:36:15.511 START TEST spdkcli_nvmf_tcp 00:36:15.511 ************************************ 00:36:15.511 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:15.511 * Looking for test storage... 
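The iptr step in the teardown just above is worth calling out: it reloads the firewall from a dump with the test's own rules filtered out, so per-test iptables entries never leak into the next run. A minimal sketch, assuming (as the grep above implies) that SPDK tags its rules with an SPDK_NVMF comment:

# Drop every rule mentioning SPDK_NVMF, keep everything else intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore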
00:36:15.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:15.511 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:15.511 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:36:15.511 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.773 --rc genhtml_branch_coverage=1 00:36:15.773 --rc genhtml_function_coverage=1 00:36:15.773 --rc genhtml_legend=1 00:36:15.773 --rc geninfo_all_blocks=1 00:36:15.773 --rc geninfo_unexecuted_blocks=1 00:36:15.773 00:36:15.773 ' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.773 --rc 
genhtml_branch_coverage=1 00:36:15.773 --rc genhtml_function_coverage=1 00:36:15.773 --rc genhtml_legend=1 00:36:15.773 --rc geninfo_all_blocks=1 00:36:15.773 --rc geninfo_unexecuted_blocks=1 00:36:15.773 00:36:15.773 ' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.773 --rc genhtml_branch_coverage=1 00:36:15.773 --rc genhtml_function_coverage=1 00:36:15.773 --rc genhtml_legend=1 00:36:15.773 --rc geninfo_all_blocks=1 00:36:15.773 --rc geninfo_unexecuted_blocks=1 00:36:15.773 00:36:15.773 ' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:15.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.773 --rc genhtml_branch_coverage=1 00:36:15.773 --rc genhtml_function_coverage=1 00:36:15.773 --rc genhtml_legend=1 00:36:15.773 --rc geninfo_all_blocks=1 00:36:15.773 --rc geninfo_unexecuted_blocks=1 00:36:15.773 00:36:15.773 ' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.773 19:25:44 spdkcli_nvmf_tcp 
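The lt/cmp_versions dance traced a little earlier (checking whether lcov is older than 2.x) compares version strings field by field after splitting on '.', '-' and ':'. A condensed sketch of that comparison, assuming purely numeric fields (the real scripts/common.sh routes each field through a decimal helper first):

version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov is older than 2.x"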
-- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:15.773 19:25:44 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:15.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # 
SPDKCLI_BRANCH=/nvmf 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=641664 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 641664 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 641664 ']' 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:15.774 19:25:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.774 [2024-11-05 19:25:44.991593] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:36:15.774 [2024-11-05 19:25:44.991672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641664 ] 00:36:15.774 [2024-11-05 19:25:45.066977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:16.034 [2024-11-05 19:25:45.111161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.034 [2024-11-05 19:25:45.111164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.605 19:25:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:16.605 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:16.605 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:16.605 
'\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:16.605 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:16.605 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:16.605 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:16.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:16.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:16.605 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:16.605 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:16.605 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:16.605 ' 00:36:19.148 [2024-11-05 19:25:48.249450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.537 [2024-11-05 19:25:49.457391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:22.582 [2024-11-05 19:25:51.676064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:24.520 [2024-11-05 19:25:53.581625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:25.905 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:25.905 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:25.905 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.905 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.905 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:25.905 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:25.905 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:25.905 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:25.905 19:25:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:26.477 19:25:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:26.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:26.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:26.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:26.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:26.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:26.477 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:26.477 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:26.477 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:26.477 ' 00:36:31.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:31.762 Executing command: 
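The check_match step a few entries back is the pass/fail gate for this whole test: the live spdkcli tree is dumped and compared against a stored template. A condensed view (the redirect into the .test file is not visible in the xtrace output but is implied by the rm that follows; the match tool compares <file> against <file>.match):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/scripts/spdkcli.py" ll /nvmf \
    > "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test"
# match exits non-zero on any line that diverges from the template.
"$rootdir/test/app/match/match" \
    "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test.match"
rm -f "$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test"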
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:31.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:31.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:31.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:31.762 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:31.762 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:31.762 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:31.762 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 641664 ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 641664' 00:36:31.762 killing process with pid 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 641664 ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 641664 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 641664 ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 641664 00:36:31.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (641664) - No such process 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 641664 is not found' 
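The killprocess helper seen here (and earlier for pid 638120) is deliberately tolerant of an already-dead target: the cleanup trap fires after the test body has killed pid 641664 once, and kill -0 is what turns the second attempt into a no-op. A condensed sketch of that pattern (not the literal autotest_common.sh source):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 only probes for liveness; failure means the pid is gone.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    # Refuse to signal anything we don't own by comm name (e.g. sudo).
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the exit status when the target was started by this shell.
    wait "$pid" 2>/dev/null || true
}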
00:36:31.762 Process with pid 641664 is not found 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:31.762 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:31.763 19:26:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:31.763 00:36:31.763 real 0m16.235s 00:36:31.763 user 0m33.600s 00:36:31.763 sys 0m0.725s 00:36:31.763 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:31.763 19:26:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:31.763 ************************************ 00:36:31.763 END TEST spdkcli_nvmf_tcp 00:36:31.763 ************************************ 00:36:31.763 19:26:00 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:31.763 19:26:00 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:31.763 19:26:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:31.763 19:26:00 -- common/autotest_common.sh@10 -- # set +x 00:36:31.763 ************************************ 00:36:31.763 START TEST nvmf_identify_passthru 00:36:31.763 ************************************ 00:36:31.763 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:32.025 * Looking for test storage... 00:36:32.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.026 --rc genhtml_branch_coverage=1 00:36:32.026 --rc genhtml_function_coverage=1 00:36:32.026 --rc genhtml_legend=1 00:36:32.026 --rc geninfo_all_blocks=1 00:36:32.026 --rc geninfo_unexecuted_blocks=1 00:36:32.026 00:36:32.026 ' 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.026 --rc genhtml_branch_coverage=1 00:36:32.026 --rc genhtml_function_coverage=1 00:36:32.026 --rc genhtml_legend=1 00:36:32.026 --rc geninfo_all_blocks=1 00:36:32.026 --rc geninfo_unexecuted_blocks=1 00:36:32.026 00:36:32.026 ' 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.026 --rc genhtml_branch_coverage=1 00:36:32.026 --rc genhtml_function_coverage=1 00:36:32.026 --rc genhtml_legend=1 00:36:32.026 --rc geninfo_all_blocks=1 00:36:32.026 --rc geninfo_unexecuted_blocks=1 00:36:32.026 00:36:32.026 ' 00:36:32.026 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:32.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.026 --rc genhtml_branch_coverage=1 00:36:32.026 --rc genhtml_function_coverage=1 00:36:32.026 --rc genhtml_legend=1 00:36:32.026 --rc geninfo_all_blocks=1 00:36:32.026 --rc geninfo_unexecuted_blocks=1 00:36:32.026 00:36:32.026 ' 00:36:32.026 19:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.026 19:26:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.026 19:26:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.026 19:26:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.026 19:26:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:32.026 19:26:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:32.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:32.026 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:32.026 19:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.026 19:26:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.027 19:26:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.027 19:26:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.027 19:26:01 nvmf_identify_passthru -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.027 19:26:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:32.027 19:26:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.027 19:26:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:32.027 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:36:32.027 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:32.027 19:26:01 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:36:32.027 19:26:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga 
x722 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:40.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:40.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:40.171 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:40.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # create_target_ns 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:40.171 
19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:36:40.171 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 
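The val_to_ip step traced above unpacks the pooled integer 167772161 (0x0A000001) into the dotted quad 10.0.0.1 that gets assigned to cvl_0_0. A minimal sketch of that conversion, with the byte-shift arithmetic inferred from the printf arguments in the trace (helper name hypothetical):

    # Unpack a 32-bit value into four octets, most significant byte first.
    val_to_ip_sketch() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    }
    val_to_ip_sketch 167772161   # -> 10.0.0.1
    val_to_ip_sketch 167772162   # -> 10.0.0.2, the next address in the pool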
00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:40.172 10.0.0.1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:40.172 10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:36:40.172 19:26:08 
nvmf_identify_passthru -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:40.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.708 ms 00:36:40.172 00:36:40.172 --- 10.0.0.1 ping statistics --- 00:36:40.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.172 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:36:40.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:40.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:36:40.172 00:36:40.172 --- 10.0.0.2 ping statistics --- 00:36:40.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.172 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair++ )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:36:40.172 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:36:40.173 19:26:08 
nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 
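Every NVMF_*_IP lookup in this stretch of the trace reduces to the same pattern: resolve the logical endpoint (initiator0, target0, ...) to a net device through dev_map, then read back the address that setup earlier echoed into that device's ifalias. A condensed sketch, assuming a hypothetical helper name and mirroring the trace's behavior for unmapped endpoints (which is why NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up empty):

    declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
    get_ip_sketch() {
        local dev=${dev_map[$1]:-}
        # Unmapped endpoint (e.g. target1): return 1, caller stores an empty IP.
        [[ -n $dev ]] || return 1
        # Note: the real helper runs this inside nvmf_ns_spdk for target-side devices.
        cat "/sys/class/net/$dev/ifalias"   # 10.0.0.1 / 10.0.0.2
    }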
00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:40.173 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:40.173 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:40.173 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:40.173 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:36:40.173 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:40.173 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:40.173 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=648627 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:40.476 19:26:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 648627 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 648627 ']' 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:40.476 19:26:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.476 [2024-11-05 19:26:09.772798] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:36:40.476 [2024-11-05 19:26:09.772868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.735 [2024-11-05 19:26:09.854522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:40.735 [2024-11-05 19:26:09.897305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.735 [2024-11-05 19:26:09.897342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.735 [2024-11-05 19:26:09.897353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.735 [2024-11-05 19:26:09.897360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.735 [2024-11-05 19:26:09.897366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
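The target application is launched at this point inside the nvmf_ns_spdk namespace with --wait-for-rpc, and waitforlisten then blocks until the RPC socket answers before the test issues any rpc_cmd calls. A rough sketch of that launch-and-wait step, assuming relative paths and using rpc.py's rpc_get_methods as the liveness probe (the real waitforlisten adds a retry cap and error handling):

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done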
00:36:40.735 [2024-11-05 19:26:09.899222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.735 [2024-11-05 19:26:09.899342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:40.735 [2024-11-05 19:26:09.899502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.735 [2024-11-05 19:26:09.899503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.307 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:41.307 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:41.307 19:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:41.307 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.307 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.307 INFO: Log level set to 20 00:36:41.307 INFO: Requests: 00:36:41.307 { 00:36:41.307 "jsonrpc": "2.0", 00:36:41.307 "method": "nvmf_set_config", 00:36:41.307 "id": 1, 00:36:41.307 "params": { 00:36:41.307 "admin_cmd_passthru": { 00:36:41.307 "identify_ctrlr": true 00:36:41.307 } 00:36:41.307 } 00:36:41.307 } 00:36:41.307 00:36:41.308 INFO: response: 00:36:41.308 { 00:36:41.308 "jsonrpc": "2.0", 00:36:41.308 "id": 1, 00:36:41.308 "result": true 00:36:41.308 } 00:36:41.308 00:36:41.308 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.308 19:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:41.308 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.308 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.308 INFO: Setting log level to 20 00:36:41.308 INFO: Setting log level to 20 00:36:41.308 INFO: Log level set to 20 00:36:41.308 INFO: Log level set to 20 00:36:41.308 INFO: Requests: 00:36:41.308 { 00:36:41.308 "jsonrpc": "2.0", 00:36:41.308 "method": "framework_start_init", 00:36:41.308 "id": 1 00:36:41.308 } 00:36:41.308 00:36:41.308 INFO: Requests: 00:36:41.308 { 00:36:41.308 "jsonrpc": "2.0", 00:36:41.308 "method": "framework_start_init", 00:36:41.308 "id": 1 00:36:41.308 } 00:36:41.308 00:36:41.568 [2024-11-05 19:26:10.647547] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:41.568 INFO: response: 00:36:41.568 { 00:36:41.568 "jsonrpc": "2.0", 00:36:41.568 "id": 1, 00:36:41.568 "result": true 00:36:41.568 } 00:36:41.568 00:36:41.568 INFO: response: 00:36:41.568 { 00:36:41.568 "jsonrpc": "2.0", 00:36:41.568 "id": 1, 00:36:41.568 "result": true 00:36:41.568 } 00:36:41.568 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.568 19:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.568 INFO: Setting log level to 40 00:36:41.568 INFO: Setting log level to 40 00:36:41.568 INFO: Setting log level to 40 00:36:41.568 [2024-11-05 19:26:10.660893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.568 19:26:10 
nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.568 19:26:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.568 19:26:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.829 Nvme0n1 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.829 [2024-11-05 19:26:11.057184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:41.829 [ 00:36:41.829 { 00:36:41.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:41.829 "subtype": "Discovery", 00:36:41.829 "listen_addresses": [], 00:36:41.829 "allow_any_host": true, 00:36:41.829 "hosts": [] 00:36:41.829 }, 00:36:41.829 { 00:36:41.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:41.829 "subtype": "NVMe", 00:36:41.829 "listen_addresses": [ 00:36:41.829 { 00:36:41.829 "trtype": "TCP", 00:36:41.829 "adrfam": "IPv4", 00:36:41.829 "traddr": "10.0.0.2", 00:36:41.829 "trsvcid": "4420" 00:36:41.829 } 00:36:41.829 ], 00:36:41.829 "allow_any_host": true, 00:36:41.829 "hosts": [], 00:36:41.829 "serial_number": "SPDK00000000000001", 00:36:41.829 "model_number": "SPDK bdev Controller", 00:36:41.829 "max_namespaces": 1, 00:36:41.829 "min_cntlid": 1, 00:36:41.829 "max_cntlid": 65519, 00:36:41.829 "namespaces": [ 00:36:41.829 { 00:36:41.829 "nsid": 1, 00:36:41.829 "bdev_name": "Nvme0n1", 00:36:41.829 "name": "Nvme0n1", 00:36:41.829 "nguid": "36344730526054870025384500000044", 00:36:41.829 "uuid": 
"36344730-5260-5487-0025-384500000044" 00:36:41.829 } 00:36:41.829 ] 00:36:41.829 } 00:36:41.829 ] 00:36:41.829 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:41.829 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:42.091 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:42.091 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:42.091 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:42.091 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:42.352 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.352 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.352 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:42.352 19:26:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:42.352 rmmod nvme_tcp 00:36:42.352 rmmod nvme_fabrics 00:36:42.352 rmmod nvme_keyring 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:42.352 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:36:42.353 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:36:42.353 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 648627 ']' 00:36:42.353 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 648627 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 648627 ']' 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 648627 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 
00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 648627 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 648627' 00:36:42.353 killing process with pid 648627 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 648627 00:36:42.353 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 648627 00:36:42.614 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:42.614 19:26:11 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:36:42.614 19:26:11 nvmf_identify_passthru -- nvmf/setup.sh@264 -- # local dev 00:36:42.614 19:26:11 nvmf_identify_passthru -- nvmf/setup.sh@267 -- # remove_target_ns 00:36:42.614 19:26:11 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:42.614 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:36:42.614 19:26:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@268 -- # delete_main_bridge 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # return 0 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:36:45.160 19:26:13 
nvmf_identify_passthru -- nvmf/setup.sh@284 -- # iptr 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-save 00:36:45.160 19:26:13 nvmf_identify_passthru -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:36:45.161 19:26:13 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-restore 00:36:45.161 00:36:45.161 real 0m12.916s 00:36:45.161 user 0m9.958s 00:36:45.161 sys 0m6.549s 00:36:45.161 19:26:13 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:45.161 19:26:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 ************************************ 00:36:45.161 END TEST nvmf_identify_passthru 00:36:45.161 ************************************ 00:36:45.161 19:26:13 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.161 19:26:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:45.161 19:26:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:45.161 19:26:13 -- common/autotest_common.sh@10 -- # set +x 00:36:45.161 ************************************ 00:36:45.161 START TEST nvmf_dif 00:36:45.161 ************************************ 00:36:45.161 19:26:13 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:45.161 * Looking for test storage... 00:36:45.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.161 --rc genhtml_branch_coverage=1 00:36:45.161 --rc genhtml_function_coverage=1 00:36:45.161 --rc genhtml_legend=1 00:36:45.161 --rc geninfo_all_blocks=1 00:36:45.161 --rc geninfo_unexecuted_blocks=1 00:36:45.161 00:36:45.161 ' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.161 --rc genhtml_branch_coverage=1 00:36:45.161 --rc genhtml_function_coverage=1 00:36:45.161 --rc genhtml_legend=1 00:36:45.161 --rc geninfo_all_blocks=1 00:36:45.161 --rc geninfo_unexecuted_blocks=1 00:36:45.161 00:36:45.161 ' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.161 --rc genhtml_branch_coverage=1 00:36:45.161 --rc genhtml_function_coverage=1 00:36:45.161 --rc genhtml_legend=1 00:36:45.161 --rc geninfo_all_blocks=1 00:36:45.161 --rc geninfo_unexecuted_blocks=1 00:36:45.161 00:36:45.161 ' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.161 --rc genhtml_branch_coverage=1 00:36:45.161 --rc genhtml_function_coverage=1 00:36:45.161 --rc genhtml_legend=1 00:36:45.161 --rc geninfo_all_blocks=1 00:36:45.161 --rc geninfo_unexecuted_blocks=1 00:36:45.161 00:36:45.161 ' 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:45.161 
19:26:14 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.161 19:26:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.161 19:26:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.161 19:26:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.161 19:26:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.161 19:26:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:45.161 19:26:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:45.161 19:26:14 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:45.161 19:26:14 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:45.161 19:26:14 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:45.161 19:26:14 nvmf_dif -- 
nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:45.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:45.161 19:26:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:36:45.161 19:26:14 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:36:45.161 19:26:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:45.161 19:26:14 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:36:45.162 19:26:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:53.304 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:53.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:36:53.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:53.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@257 -- # create_target_ns 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 
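[Editor's note] Everything from create_target_ns through the ping pair below plumbs the two discovered E810 ports into a point-to-point rig: cvl_0_0 stays in the root namespace as the initiator, cvl_0_1 moves into a fresh nvmf_ns_spdk namespace as the target, and the ip_pool value 167772161 (0x0A000001) that val_to_ip formats byte by byte becomes 10.0.0.1. Condensed into hand-runnable form, with commands taken from the trace itself (device names are this host's); the SPDK_NVMF tag on the firewall rule is what lets the iptr cleanup at the top of this section strip it again via iptables-save | grep -v SPDK_NVMF | iptables-restore:

# Condensed equivalent of the namespace/interface setup traced around this point.
ip netns add nvmf_ns_spdk                      # private namespace for the target side
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk         # second port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_0            # initiator address, root namespace
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1  # cross-namespace sanity check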
00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:53.304 10.0.0.1 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:53.304 19:26:21 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:53.305 10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:36:53.305 19:26:21 
nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:53.305 19:26:21 nvmf_dif -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:53.305 19:26:21 nvmf_dif -- 
nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:53.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:53.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.555 ms 00:36:53.305 00:36:53.305 --- 10.0.0.1 ping statistics --- 00:36:53.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.305 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:36:53.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:53.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:36:53.305 00:36:53.305 --- 10.0.0.2 ping statistics --- 00:36:53.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:53.305 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair++ )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:36:53.305 19:26:21 nvmf_dif -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:53.305 19:26:21 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:36:53.305 19:26:21 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:36:53.305 19:26:21 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:55.850 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:55.850 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:55.850 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:56.111 19:26:25 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:36:56.111 
19:26:25 nvmf_dif -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:36:56.111 19:26:25 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target1 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target1 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:36:56.112 
19:26:25 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:36:56.112 19:26:25 nvmf_dif -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:56.112 19:26:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:56.112 19:26:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=654655 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 654655 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 654655 ']' 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:56.112 19:26:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.112 19:26:25 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:56.374 [2024-11-05 19:26:25.440384] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:36:56.374 [2024-11-05 19:26:25.440452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.374 [2024-11-05 19:26:25.522230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.374 [2024-11-05 19:26:25.563524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:56.374 [2024-11-05 19:26:25.563559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:56.374 [2024-11-05 19:26:25.563567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:56.374 [2024-11-05 19:26:25.563574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:56.374 [2024-11-05 19:26:25.563580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
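[Editor's note] With the transport options settled ("-t tcp -o", plus the --dif-insert-or-strip that dif.sh appends) and nvme-tcp loaded on the initiator side, nvmfappstart launches nvmf_tgt inside the target namespace and blocks on its RPC socket; the EAL banner and reactor notice below are that process coming up. A rough stand-alone equivalent, with the harness's waitforlisten approximated by polling rpc.py (paths relative to an SPDK checkout):

# Approximate stand-alone version of nvmfappstart + create_transport.
ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5        # stand-in for the harness's waitforlisten polling
done

# TCP transport with DIF insert/strip, exactly as dif.sh requests it:
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip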
00:36:56.374 [2024-11-05 19:26:25.564177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:56.944 19:26:26 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.944 19:26:26 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:56.944 19:26:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:56.944 19:26:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.944 19:26:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:56.944 [2024-11-05 19:26:26.265719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.205 19:26:26 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.205 19:26:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:57.205 19:26:26 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:57.205 19:26:26 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:57.205 19:26:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:57.205 ************************************ 00:36:57.205 START TEST fio_dif_1_default 00:36:57.205 ************************************ 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.205 bdev_null0 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:57.205 [2024-11-05 19:26:26.334054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.205 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:57.206 { 00:36:57.206 "params": { 00:36:57.206 "name": "Nvme$subsystem", 00:36:57.206 "trtype": "$TEST_TRANSPORT", 00:36:57.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:57.206 "adrfam": "ipv4", 00:36:57.206 "trsvcid": "$NVMF_PORT", 00:36:57.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:57.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:57.206 "hdgst": ${hdgst:-false}, 00:36:57.206 "ddgst": ${ddgst:-false} 00:36:57.206 }, 00:36:57.206 "method": "bdev_nvme_attach_controller" 00:36:57.206 } 00:36:57.206 EOF 00:36:57.206 )") 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1347 -- # grep libasan 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:57.206 "params": { 00:36:57.206 "name": "Nvme0", 00:36:57.206 "trtype": "tcp", 00:36:57.206 "traddr": "10.0.0.2", 00:36:57.206 "adrfam": "ipv4", 00:36:57.206 "trsvcid": "4420", 00:36:57.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.206 "hdgst": false, 00:36:57.206 "ddgst": false 00:36:57.206 }, 00:36:57.206 "method": "bdev_nvme_attach_controller" 00:36:57.206 }' 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:57.206 19:26:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:57.467 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:57.467 fio-3.35 00:36:57.467 Starting 1 thread 00:37:09.702 00:37:09.702 filename0: (groupid=0, jobs=1): err= 0: pid=655180: Tue Nov 5 19:26:37 2024 00:37:09.702 read: IOPS=190, BW=761KiB/s (779kB/s)(7616KiB/10012msec) 00:37:09.702 slat (nsec): min=5374, max=63791, avg=6167.94, stdev=1882.65 00:37:09.702 clat (usec): min=593, max=43989, avg=21016.90, stdev=20128.32 00:37:09.702 lat (usec): min=599, max=44025, avg=21023.06, stdev=20128.30 00:37:09.702 clat percentiles (usec): 00:37:09.702 | 1.00th=[ 693], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 898], 00:37:09.702 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 1029], 60.00th=[41157], 00:37:09.702 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:09.702 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:37:09.702 | 99.99th=[43779] 00:37:09.702 bw ( KiB/s): min= 704, max= 768, per=99.91%, avg=760.00, stdev=20.44, samples=20 00:37:09.702 iops : min= 176, max= 192, avg=190.00, stdev= 5.11, samples=20 00:37:09.702 lat (usec) : 750=3.10%, 1000=46.06% 00:37:09.702 lat (msec) : 
2=0.84%, 50=50.00% 00:37:09.702 cpu : usr=93.59%, sys=6.18%, ctx=21, majf=0, minf=205 00:37:09.702 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.702 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.702 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:09.702 00:37:09.702 Run status group 0 (all jobs): 00:37:09.702 READ: bw=761KiB/s (779kB/s), 761KiB/s-761KiB/s (779kB/s-779kB/s), io=7616KiB (7799kB), run=10012-10012msec 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 00:37:09.702 real 0m11.193s 00:37:09.702 user 0m23.031s 00:37:09.702 sys 0m0.935s 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 ************************************ 00:37:09.702 END TEST fio_dif_1_default 00:37:09.702 ************************************ 00:37:09.702 19:26:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:09.702 19:26:37 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:09.702 19:26:37 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 ************************************ 00:37:09.702 START TEST fio_dif_1_multi_subsystems 00:37:09.702 ************************************ 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:09.702 19:26:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 bdev_null0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 [2024-11-05 19:26:37.622177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 bdev_null1 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.702 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:09.703 { 00:37:09.703 "params": { 00:37:09.703 "name": "Nvme$subsystem", 00:37:09.703 "trtype": "$TEST_TRANSPORT", 00:37:09.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.703 "adrfam": "ipv4", 00:37:09.703 "trsvcid": "$NVMF_PORT", 00:37:09.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.703 "hdgst": ${hdgst:-false}, 00:37:09.703 "ddgst": ${ddgst:-false} 00:37:09.703 }, 00:37:09.703 "method": "bdev_nvme_attach_controller" 00:37:09.703 } 00:37:09.703 EOF 00:37:09.703 )") 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.703 19:26:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:09.703 { 00:37:09.703 "params": { 00:37:09.703 "name": "Nvme$subsystem", 00:37:09.703 "trtype": "$TEST_TRANSPORT", 00:37:09.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.703 "adrfam": "ipv4", 00:37:09.703 "trsvcid": "$NVMF_PORT", 00:37:09.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.703 "hdgst": ${hdgst:-false}, 00:37:09.703 "ddgst": ${ddgst:-false} 00:37:09.703 }, 00:37:09.703 "method": "bdev_nvme_attach_controller" 00:37:09.703 } 00:37:09.703 EOF 00:37:09.703 )") 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
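[Editor's note] The two heredoc stanzas accumulated above become the bdev_nvme_attach_controller entries that jq merges and prints next; fio then sees one null-backed namespace per subsystem as its two filenames. A hypothetical stand-alone rerun of the same job, writing the config to a file instead of /dev/fd/62 — the printf below shows only the inner entries, so the subsystems/bdev envelope here is the standard shape --spdk_json_conf expects rather than a literal quote of gen_nvmf_target_json, and the Nvme0n1/Nvme1n1 filenames assume SPDK's usual <controller>n<nsid> bdev naming; job parameters mirror the randread/4k/iodepth=4 lines fio prints further down:

# Hypothetical stand-alone equivalent of the fio_bdev invocation below.
cat > /tmp/nvme.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme0",
    "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0" } },
  { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1",
    "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1" } }
] } ] }
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json --thread \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
    --time_based --runtime=10 \
    --name=filename1 --filename=Nvme1n1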
00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:09.703 "params": { 00:37:09.703 "name": "Nvme0", 00:37:09.703 "trtype": "tcp", 00:37:09.703 "traddr": "10.0.0.2", 00:37:09.703 "adrfam": "ipv4", 00:37:09.703 "trsvcid": "4420", 00:37:09.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.703 "hdgst": false, 00:37:09.703 "ddgst": false 00:37:09.703 }, 00:37:09.703 "method": "bdev_nvme_attach_controller" 00:37:09.703 },{ 00:37:09.703 "params": { 00:37:09.703 "name": "Nvme1", 00:37:09.703 "trtype": "tcp", 00:37:09.703 "traddr": "10.0.0.2", 00:37:09.703 "adrfam": "ipv4", 00:37:09.703 "trsvcid": "4420", 00:37:09.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:09.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:09.703 "hdgst": false, 00:37:09.703 "ddgst": false 00:37:09.703 }, 00:37:09.703 "method": "bdev_nvme_attach_controller" 00:37:09.703 }' 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:09.703 19:26:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.703 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:09.703 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:09.703 fio-3.35 00:37:09.703 Starting 2 threads 00:37:19.776 00:37:19.776 filename0: (groupid=0, jobs=1): err= 0: pid=657710: Tue Nov 5 19:26:48 2024 00:37:19.776 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:37:19.776 slat (nsec): min=5373, max=31417, avg=6332.96, stdev=1505.30 00:37:19.776 clat (usec): min=40752, max=43486, avg=41327.80, stdev=562.94 00:37:19.776 lat (usec): min=40759, max=43517, avg=41334.13, stdev=563.15 00:37:19.776 clat percentiles (usec): 00:37:19.776 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:19.776 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:19.776 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:37:19.776 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:37:19.776 | 99.99th=[43254] 00:37:19.776 bw ( KiB/s): min= 384, max= 416, per=33.58%, avg=385.60, 
stdev= 7.16, samples=20 00:37:19.776 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:37:19.776 lat (msec) : 50=100.00% 00:37:19.776 cpu : usr=95.30%, sys=4.50%, ctx=14, majf=0, minf=157 00:37:19.776 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.776 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.776 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:19.776 filename1: (groupid=0, jobs=1): err= 0: pid=657711: Tue Nov 5 19:26:48 2024 00:37:19.776 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10033msec) 00:37:19.776 slat (nsec): min=5375, max=33428, avg=6154.04, stdev=1314.56 00:37:19.776 clat (usec): min=676, max=43314, avg=21015.51, stdev=20130.75 00:37:19.776 lat (usec): min=682, max=43347, avg=21021.66, stdev=20130.75 00:37:19.776 clat percentiles (usec): 00:37:19.776 | 1.00th=[ 725], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 930], 00:37:19.776 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 1483], 60.00th=[41157], 00:37:19.776 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:19.776 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:37:19.776 | 99.99th=[43254] 00:37:19.776 bw ( KiB/s): min= 704, max= 832, per=66.37%, avg=761.60, stdev=26.67, samples=20 00:37:19.776 iops : min= 176, max= 208, avg=190.40, stdev= 6.67, samples=20 00:37:19.776 lat (usec) : 750=1.78%, 1000=46.02% 00:37:19.776 lat (msec) : 2=2.31%, 50=49.90% 00:37:19.776 cpu : usr=94.73%, sys=5.05%, ctx=9, majf=0, minf=124 00:37:19.776 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.776 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.776 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:19.776 00:37:19.776 Run status group 0 (all jobs): 00:37:19.776 READ: bw=1147KiB/s (1174kB/s), 387KiB/s-761KiB/s (396kB/s-779kB/s), io=11.2MiB (11.8MB), run=10006-10033msec 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.776 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 00:37:19.777 real 0m11.375s 00:37:19.777 user 0m34.677s 00:37:19.777 sys 0m1.363s 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:19.777 19:26:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 ************************************ 00:37:19.777 END TEST fio_dif_1_multi_subsystems 00:37:19.777 ************************************ 00:37:19.777 19:26:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:19.777 19:26:48 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:19.777 19:26:48 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:19.777 19:26:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 ************************************ 00:37:19.777 START TEST fio_dif_rand_params 00:37:19.777 ************************************ 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:19.777 19:26:49 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 bdev_null0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:19.777 [2024-11-05 19:26:49.065860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:19.777 { 00:37:19.777 "params": { 00:37:19.777 "name": "Nvme$subsystem", 00:37:19.777 "trtype": "$TEST_TRANSPORT", 00:37:19.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:19.777 "adrfam": "ipv4", 00:37:19.777 "trsvcid": "$NVMF_PORT", 00:37:19.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:19.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:19.777 "hdgst": ${hdgst:-false}, 00:37:19.777 "ddgst": ${ddgst:-false} 00:37:19.777 }, 00:37:19.777 "method": "bdev_nvme_attach_controller" 00:37:19.777 } 00:37:19.777 EOF 00:37:19.777 )") 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
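
[annotation] Condensed, the subsystem setup traced above (bdev_null_create through nvmf_subsystem_add_listener) amounts to the following RPC sequence. This sketch calls SPDK's rpc.py directly rather than going through the harness's rpc_cmd wrapper, and the rpc.py path is an assumption about the checkout layout:

    #!/usr/bin/env bash
    # Assumed location of rpc.py inside an SPDK checkout.
    RPC=./spdk/scripts/rpc.py

    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3,
    # matching "bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3"
    # from the trace above.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # Expose the bdev over NVMe-oF/TCP on 10.0.0.2:4420, as logged by the
    # "NVMe/TCP Target Listening" notice.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
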
00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:37:19.777 19:26:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:19.777 "params": { 00:37:19.777 "name": "Nvme0", 00:37:19.777 "trtype": "tcp", 00:37:19.777 "traddr": "10.0.0.2", 00:37:19.777 "adrfam": "ipv4", 00:37:19.777 "trsvcid": "4420", 00:37:19.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.777 "hdgst": false, 00:37:19.777 "ddgst": false 00:37:19.777 }, 00:37:19.777 "method": "bdev_nvme_attach_controller" 00:37:19.777 }' 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:20.053 19:26:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:20.316 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:20.316 ... 
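
[annotation] The job file itself arrives over /dev/fd/61 from gen_fio_conf and is never echoed in the log. A hypothetical reconstruction matching the banner above (randread, 128k blocks, iodepth 3, three jobs, 5-second runtime as set at target/dif.sh@103); the option names are standard fio, but the real generator's output may differ in detail, and the bdev name Nvme0n1 is an assumption about how the attached controller's namespace is exposed:

    #!/usr/bin/env bash
    # Write the reconstructed job file; quoted delimiter so nothing expands.
    cat <<'FIO' > /tmp/dif_rand_params.fio
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1
    FIO

    # The trace passes the bdev JSON and the job file as /dev/fd/62 and
    # /dev/fd/61; with regular files the invocation reduces to:
    /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/bdev.json /tmp/dif_rand_params.fio
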
00:37:20.316 fio-3.35 00:37:20.316 Starting 3 threads 00:37:26.904 00:37:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=659911: Tue Nov 5 19:26:55 2024 00:37:26.904 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(131MiB/5034msec) 00:37:26.904 slat (nsec): min=5385, max=30739, avg=7824.70, stdev=1855.78 00:37:26.904 clat (usec): min=5164, max=55836, avg=14441.39, stdev=11853.53 00:37:26.904 lat (usec): min=5172, max=55845, avg=14449.22, stdev=11853.68 00:37:26.904 clat percentiles (usec): 00:37:26.904 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 8717], 00:37:26.904 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11600], 60.00th=[12125], 00:37:26.904 | 70.00th=[12649], 80.00th=[13304], 90.00th=[15139], 95.00th=[50594], 00:37:26.904 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55837], 00:37:26.904 | 99.99th=[55837] 00:37:26.904 bw ( KiB/s): min=16128, max=34048, per=32.10%, avg=26675.20, stdev=6667.80, samples=10 00:37:26.904 iops : min= 126, max= 266, avg=208.40, stdev=52.09, samples=10 00:37:26.904 lat (msec) : 10=31.77%, 20=59.04%, 50=3.44%, 100=5.74% 00:37:26.904 cpu : usr=95.51%, sys=4.25%, ctx=10, majf=0, minf=107 00:37:26.904 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:26.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.904 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=659912: Tue Nov 5 19:26:55 2024 00:37:26.904 read: IOPS=217, BW=27.2MiB/s (28.6MB/s)(137MiB/5044msec) 00:37:26.904 slat (nsec): min=5425, max=31151, avg=8069.53, stdev=1673.78 00:37:26.904 clat (usec): min=5192, max=92257, avg=13720.42, stdev=11976.71 00:37:26.904 lat (usec): min=5198, max=92266, avg=13728.49, stdev=11976.76 00:37:26.904 clat percentiles (usec): 00:37:26.904 | 1.00th=[ 5669], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 7898], 00:37:26.904 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11338], 00:37:26.904 | 70.00th=[11994], 80.00th=[12911], 90.00th=[14877], 95.00th=[49546], 00:37:26.904 | 99.00th=[53216], 99.50th=[55313], 99.90th=[91751], 99.95th=[91751], 00:37:26.904 | 99.99th=[91751] 00:37:26.904 bw ( KiB/s): min=19417, max=35328, per=33.79%, avg=28079.30, stdev=4912.70, samples=10 00:37:26.904 iops : min= 151, max= 276, avg=219.30, stdev=38.52, samples=10 00:37:26.904 lat (msec) : 10=41.22%, 20=50.14%, 50=4.82%, 100=3.82% 00:37:26.904 cpu : usr=94.31%, sys=5.20%, ctx=228, majf=0, minf=84 00:37:26.904 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:26.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.904 issued rwts: total=1099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=659913: Tue Nov 5 19:26:55 2024 00:37:26.904 read: IOPS=225, BW=28.1MiB/s (29.5MB/s)(141MiB/5024msec) 00:37:26.904 slat (nsec): min=5394, max=32413, avg=7895.01, stdev=1876.60 00:37:26.904 clat (usec): min=5222, max=54175, avg=13313.93, stdev=11000.37 00:37:26.904 lat (usec): min=5230, max=54184, avg=13321.82, stdev=11000.19 00:37:26.904 clat percentiles (usec): 00:37:26.904 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 
8356], 00:37:26.904 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:37:26.904 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12911], 95.00th=[49021], 00:37:26.904 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[54264], 00:37:26.904 | 99.99th=[54264] 00:37:26.904 bw ( KiB/s): min=22528, max=36096, per=34.74%, avg=28876.80, stdev=4440.29, samples=10 00:37:26.905 iops : min= 176, max= 282, avg=225.60, stdev=34.69, samples=10 00:37:26.905 lat (msec) : 10=38.02%, 20=54.02%, 50=3.89%, 100=4.07% 00:37:26.905 cpu : usr=95.32%, sys=4.42%, ctx=11, majf=0, minf=58 00:37:26.905 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:26.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:26.905 issued rwts: total=1131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:26.905 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:26.905 00:37:26.905 Run status group 0 (all jobs): 00:37:26.905 READ: bw=81.2MiB/s (85.1MB/s), 25.9MiB/s-28.1MiB/s (27.2MB/s-29.5MB/s), io=409MiB (429MB), run=5024-5044msec 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 bdev_null0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 [2024-11-05 19:26:55.245408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 bdev_null1 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 bdev_null2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.905 19:26:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:26.905 { 00:37:26.905 "params": { 00:37:26.905 "name": "Nvme$subsystem", 00:37:26.905 "trtype": "$TEST_TRANSPORT", 00:37:26.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.905 "adrfam": "ipv4", 00:37:26.905 "trsvcid": "$NVMF_PORT", 00:37:26.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.905 "hdgst": ${hdgst:-false}, 00:37:26.905 "ddgst": ${ddgst:-false} 00:37:26.905 }, 00:37:26.905 "method": "bdev_nvme_attach_controller" 00:37:26.905 } 00:37:26.905 EOF 00:37:26.905 )") 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:26.905 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:26.906 { 00:37:26.906 "params": { 00:37:26.906 "name": "Nvme$subsystem", 00:37:26.906 "trtype": "$TEST_TRANSPORT", 00:37:26.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.906 "adrfam": "ipv4", 00:37:26.906 "trsvcid": "$NVMF_PORT", 00:37:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.906 "hdgst": ${hdgst:-false}, 00:37:26.906 "ddgst": ${ddgst:-false} 00:37:26.906 }, 00:37:26.906 "method": "bdev_nvme_attach_controller" 00:37:26.906 } 00:37:26.906 EOF 00:37:26.906 )") 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.906 19:26:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:26.906 { 00:37:26.906 "params": { 00:37:26.906 "name": "Nvme$subsystem", 00:37:26.906 "trtype": "$TEST_TRANSPORT", 00:37:26.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:26.906 "adrfam": "ipv4", 00:37:26.906 "trsvcid": "$NVMF_PORT", 00:37:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:26.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:26.906 "hdgst": ${hdgst:-false}, 00:37:26.906 "ddgst": ${ddgst:-false} 00:37:26.906 }, 00:37:26.906 "method": "bdev_nvme_attach_controller" 00:37:26.906 } 00:37:26.906 EOF 00:37:26.906 )") 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:26.906 "params": { 00:37:26.906 "name": "Nvme0", 00:37:26.906 "trtype": "tcp", 00:37:26.906 "traddr": "10.0.0.2", 00:37:26.906 "adrfam": "ipv4", 00:37:26.906 "trsvcid": "4420", 00:37:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.906 "hdgst": false, 00:37:26.906 "ddgst": false 00:37:26.906 }, 00:37:26.906 "method": "bdev_nvme_attach_controller" 00:37:26.906 },{ 00:37:26.906 "params": { 00:37:26.906 "name": "Nvme1", 00:37:26.906 "trtype": "tcp", 00:37:26.906 "traddr": "10.0.0.2", 00:37:26.906 "adrfam": "ipv4", 00:37:26.906 "trsvcid": "4420", 00:37:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:26.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:26.906 "hdgst": false, 00:37:26.906 "ddgst": false 00:37:26.906 }, 00:37:26.906 "method": "bdev_nvme_attach_controller" 00:37:26.906 },{ 00:37:26.906 "params": { 00:37:26.906 "name": "Nvme2", 00:37:26.906 "trtype": "tcp", 00:37:26.906 "traddr": "10.0.0.2", 00:37:26.906 "adrfam": "ipv4", 00:37:26.906 "trsvcid": "4420", 00:37:26.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:26.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:26.906 "hdgst": false, 00:37:26.906 "ddgst": false 00:37:26.906 }, 00:37:26.906 "method": "bdev_nvme_attach_controller" 00:37:26.906 }' 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:26.906 
19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:26.906 19:26:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:26.906 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:26.906 ... 00:37:26.906 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:26.906 ... 00:37:26.906 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:26.906 ... 00:37:26.906 fio-3.35 00:37:26.906 Starting 24 threads 00:37:39.140 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661418: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=528, BW=2114KiB/s (2165kB/s)(20.7MiB/10021msec) 00:37:39.140 slat (nsec): min=5553, max=65088, avg=9876.11, stdev=6434.53 00:37:39.140 clat (usec): min=6622, max=34270, avg=30188.09, stdev=4815.39 00:37:39.140 lat (usec): min=6641, max=34289, avg=30197.97, stdev=4816.06 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[15270], 5.00th=[21365], 10.00th=[21627], 20.00th=[24249], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:37:39.140 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:37:39.140 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:37:39.140 | 99.99th=[34341] 00:37:39.140 bw ( KiB/s): min= 1920, max= 2432, per=4.46%, avg=2112.00, stdev=163.50, samples=20 00:37:39.140 iops : min= 480, max= 608, avg=528.00, stdev=40.87, samples=20 00:37:39.140 lat (msec) : 10=0.60%, 20=0.91%, 50=98.49% 00:37:39.140 cpu : usr=97.92%, sys=1.31%, ctx=336, majf=0, minf=35 00:37:39.140 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661419: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10009msec) 00:37:39.140 slat (nsec): min=5416, max=70763, avg=20563.27, stdev=12824.62 00:37:39.140 clat (usec): min=10780, max=59093, avg=32636.03, stdev=2120.39 00:37:39.140 lat (usec): min=10786, max=59111, avg=32656.60, stdev=2119.90 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.140 | 99.00th=[34341], 99.50th=[34866], 99.90th=[58983], 99.95th=[58983], 00:37:39.140 | 99.99th=[58983] 00:37:39.140 bw ( KiB/s): min= 1795, max= 2048, per=4.09%, avg=1940.37, stdev=63.80, samples=19 00:37:39.140 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:37:39.140 lat (msec) : 20=0.35%, 50=99.32%, 100=0.33% 00:37:39.140 cpu : usr=98.83%, sys=0.85%, ctx=68, majf=0, minf=16 00:37:39.140 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 issued rwts: total=4878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661420: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10009msec) 00:37:39.140 slat (nsec): min=2776, max=56810, avg=13185.41, stdev=9603.32 00:37:39.140 clat (usec): min=1012, max=43994, avg=31386.19, stdev=5682.18 00:37:39.140 lat (usec): min=1017, max=44002, avg=31399.38, stdev=5683.89 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[ 1565], 5.00th=[22152], 10.00th=[32113], 20.00th=[32375], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:37:39.140 | 99.00th=[34341], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:37:39.140 | 99.99th=[43779] 00:37:39.140 bw ( KiB/s): min= 1920, max= 3560, per=4.29%, avg=2033.26, stdev=373.26, samples=19 00:37:39.140 iops : min= 480, max= 890, avg=508.32, stdev=93.31, samples=19 00:37:39.140 lat (msec) : 2=1.53%, 4=0.35%, 10=1.26%, 20=1.59%, 50=95.26% 00:37:39.140 cpu : usr=98.91%, sys=0.81%, ctx=13, majf=0, minf=28 00:37:39.140 IO depths : 1=4.9%, 2=10.8%, 4=23.7%, 8=53.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 issued rwts: total=5085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661421: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10025msec) 00:37:39.140 slat (nsec): min=5570, max=76108, avg=17260.78, stdev=12618.96 00:37:39.140 clat (usec): min=15587, max=50422, avg=32576.96, stdev=2144.64 00:37:39.140 lat (usec): min=15598, max=50446, avg=32594.22, stdev=2145.06 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[21890], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.140 | 99.00th=[36963], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:37:39.140 | 99.99th=[50594] 00:37:39.140 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1952.85, stdev=70.07, samples=20 00:37:39.140 iops : min= 448, max= 512, avg=488.20, stdev=17.53, samples=20 00:37:39.140 lat (msec) : 20=0.65%, 50=99.22%, 100=0.12% 00:37:39.140 cpu : usr=99.09%, sys=0.60%, ctx=13, majf=0, minf=22 00:37:39.140 IO depths : 1=5.8%, 2=11.7%, 4=24.3%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 issued rwts: total=4902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661422: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10011msec) 00:37:39.140 slat (nsec): min=5578, max=65158, 
avg=13637.10, stdev=9083.90 00:37:39.140 clat (usec): min=20132, max=49442, avg=32719.14, stdev=1210.84 00:37:39.140 lat (usec): min=20138, max=49460, avg=32732.77, stdev=1210.74 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.140 | 99.00th=[34866], 99.50th=[34866], 99.90th=[43254], 99.95th=[43779], 00:37:39.140 | 99.99th=[49546] 00:37:39.140 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1946.95, stdev=76.93, samples=19 00:37:39.140 iops : min= 448, max= 512, avg=486.74, stdev=19.23, samples=19 00:37:39.140 lat (msec) : 50=100.00% 00:37:39.140 cpu : usr=98.07%, sys=1.13%, ctx=381, majf=0, minf=20 00:37:39.140 IO depths : 1=3.0%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:39.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.140 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.140 filename0: (groupid=0, jobs=1): err= 0: pid=661423: Tue Nov 5 19:27:06 2024 00:37:39.140 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.7MiB/10029msec) 00:37:39.140 slat (nsec): min=5562, max=52955, avg=9302.05, stdev=5190.78 00:37:39.140 clat (usec): min=5150, max=43265, avg=31782.26, stdev=3744.88 00:37:39.140 lat (usec): min=5169, max=43289, avg=31791.56, stdev=3744.60 00:37:39.140 clat percentiles (usec): 00:37:39.140 | 1.00th=[14484], 5.00th=[22152], 10.00th=[32113], 20.00th=[32375], 00:37:39.140 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.140 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.140 | 99.00th=[34341], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:37:39.140 | 99.99th=[43254] 00:37:39.140 bw ( KiB/s): min= 1920, max= 2512, per=4.24%, avg=2008.00, stdev=152.45, samples=20 00:37:39.140 iops : min= 480, max= 628, avg=502.00, stdev=38.11, samples=20 00:37:39.141 lat (msec) : 10=0.56%, 20=1.91%, 50=97.54% 00:37:39.141 cpu : usr=98.71%, sys=0.93%, ctx=56, majf=0, minf=19 00:37:39.141 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename0: (groupid=0, jobs=1): err= 0: pid=661424: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=488, BW=1953KiB/s (2000kB/s)(19.1MiB/10009msec) 00:37:39.141 slat (nsec): min=5551, max=61655, avg=15145.95, stdev=8924.96 00:37:39.141 clat (usec): min=17058, max=50614, avg=32633.96, stdev=2056.05 00:37:39.141 lat (usec): min=17065, max=50624, avg=32649.11, stdev=2056.62 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[22152], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.141 | 99.00th=[34866], 99.50th=[48497], 99.90th=[49021], 99.95th=[50070], 00:37:39.141 | 99.99th=[50594] 00:37:39.141 bw ( KiB/s): 
min= 1843, max= 2048, per=4.10%, avg=1942.89, stdev=56.98, samples=19 00:37:39.141 iops : min= 460, max= 512, avg=485.68, stdev=14.32, samples=19 00:37:39.141 lat (msec) : 20=0.33%, 50=99.63%, 100=0.04% 00:37:39.141 cpu : usr=99.05%, sys=0.66%, ctx=23, majf=0, minf=28 00:37:39.141 IO depths : 1=5.0%, 2=11.1%, 4=24.7%, 8=51.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename0: (groupid=0, jobs=1): err= 0: pid=661425: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:37:39.141 slat (nsec): min=5554, max=49308, avg=10041.00, stdev=5982.81 00:37:39.141 clat (usec): min=14912, max=34912, avg=32608.95, stdev=1611.88 00:37:39.141 lat (usec): min=14930, max=34919, avg=32618.99, stdev=1611.20 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[25297], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32900], 00:37:39.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.141 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:37:39.141 | 99.99th=[34866] 00:37:39.141 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1953.68, stdev=57.91, samples=19 00:37:39.141 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:37:39.141 lat (msec) : 20=0.65%, 50=99.35% 00:37:39.141 cpu : usr=98.84%, sys=0.87%, ctx=17, majf=0, minf=27 00:37:39.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661426: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=517, BW=2071KiB/s (2120kB/s)(20.3MiB/10030msec) 00:37:39.141 slat (nsec): min=5558, max=66629, avg=12015.46, stdev=9049.85 00:37:39.141 clat (usec): min=4114, max=50006, avg=30810.20, stdev=4689.01 00:37:39.141 lat (usec): min=4130, max=50013, avg=30822.21, stdev=4689.58 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[14877], 5.00th=[21627], 10.00th=[22152], 20.00th=[32113], 00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:37:39.141 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:37:39.141 | 99.00th=[34866], 99.50th=[42206], 99.90th=[45351], 99.95th=[50070], 00:37:39.141 | 99.99th=[50070] 00:37:39.141 bw ( KiB/s): min= 1792, max= 3120, per=4.37%, avg=2070.40, stdev=324.41, samples=20 00:37:39.141 iops : min= 448, max= 780, avg=517.60, stdev=81.10, samples=20 00:37:39.141 lat (msec) : 10=0.52%, 20=1.79%, 50=97.65%, 100=0.04% 00:37:39.141 cpu : usr=98.96%, sys=0.73%, ctx=12, majf=0, minf=18 00:37:39.141 IO depths : 1=4.9%, 2=10.1%, 4=21.8%, 8=55.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=5192,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661427: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:37:39.141 slat (nsec): min=5567, max=71154, avg=13720.41, stdev=10683.80 00:37:39.141 clat (usec): min=14880, max=43030, avg=32573.48, stdev=1677.87 00:37:39.141 lat (usec): min=14898, max=43036, avg=32587.20, stdev=1676.60 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:37:39.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:37:39.141 | 99.00th=[34341], 99.50th=[34866], 99.90th=[37487], 99.95th=[42730], 00:37:39.141 | 99.99th=[43254] 00:37:39.141 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1953.68, stdev=57.91, samples=19 00:37:39.141 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:37:39.141 lat (msec) : 20=0.65%, 50=99.35% 00:37:39.141 cpu : usr=98.75%, sys=0.82%, ctx=99, majf=0, minf=20 00:37:39.141 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661428: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=545, BW=2182KiB/s (2235kB/s)(21.4MiB/10021msec) 00:37:39.141 slat (nsec): min=5560, max=70312, avg=10775.50, stdev=8403.85 00:37:39.141 clat (usec): min=8024, max=45201, avg=29241.22, stdev=5191.49 00:37:39.141 lat (usec): min=8039, max=45233, avg=29252.00, stdev=5193.52 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[15926], 5.00th=[21365], 10.00th=[21627], 20.00th=[22414], 00:37:39.141 | 30.00th=[23987], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:37:39.141 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:37:39.141 | 99.00th=[34341], 99.50th=[34341], 99.90th=[43779], 99.95th=[44827], 00:37:39.141 | 99.99th=[45351] 00:37:39.141 bw ( KiB/s): min= 1920, max= 2896, per=4.60%, avg=2180.40, stdev=344.47, samples=20 00:37:39.141 iops : min= 480, max= 724, avg=545.10, stdev=86.12, samples=20 00:37:39.141 lat (msec) : 10=0.13%, 20=1.83%, 50=98.04% 00:37:39.141 cpu : usr=98.84%, sys=0.72%, ctx=54, majf=0, minf=27 00:37:39.141 IO depths : 1=4.2%, 2=8.9%, 4=20.0%, 8=58.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.141 issued rwts: total=5467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661429: Tue Nov 5 19:27:06 2024 00:37:39.141 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10009msec) 00:37:39.141 slat (nsec): min=5535, max=62564, avg=17181.27, stdev=10295.59 00:37:39.141 clat (usec): min=10797, max=69581, avg=32664.76, stdev=2223.16 00:37:39.141 lat (usec): min=10810, max=69598, avg=32681.94, stdev=2222.75 00:37:39.141 clat percentiles (usec): 00:37:39.141 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 
00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.141 | 99.00th=[34341], 99.50th=[34866], 99.90th=[58459], 99.95th=[58459],
00:37:39.141 | 99.99th=[69731]
00:37:39.141 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1940.21, stdev=64.19, samples=19
00:37:39.141 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19
00:37:39.141 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:37:39.141 cpu : usr=99.07%, sys=0.63%, ctx=13, majf=0, minf=16
00:37:39.141 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.141 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661430: Tue Nov 5 19:27:06 2024
00:37:39.141 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10010msec)
00:37:39.141 slat (nsec): min=5553, max=63088, avg=16499.88, stdev=10162.26
00:37:39.141 clat (usec): min=17116, max=50314, avg=32685.02, stdev=1697.76
00:37:39.141 lat (usec): min=17122, max=50330, avg=32701.52, stdev=1697.80
00:37:39.141 clat percentiles (usec):
00:37:39.141 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:37:39.141 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.141 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.141 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50070], 99.95th=[50070],
00:37:39.141 | 99.99th=[50070]
00:37:39.141 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1940.21, stdev=64.41, samples=19
00:37:39.141 iops : min= 448, max= 512, avg=485.05, stdev=16.10, samples=19
00:37:39.141 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:37:39.141 cpu : usr=99.04%, sys=0.67%, ctx=15, majf=0, minf=27
00:37:39.141 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:37:39.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.141 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.141 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.141 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.141 filename1: (groupid=0, jobs=1): err= 0: pid=661431: Tue Nov 5 19:27:06 2024
00:37:39.141 read: IOPS=488, BW=1953KiB/s (2000kB/s)(19.1MiB/10029msec)
00:37:39.141 slat (nsec): min=5554, max=54707, avg=9883.28, stdev=6849.90
00:37:39.141 clat (usec): min=18754, max=46086, avg=32671.21, stdev=1481.12
00:37:39.141 lat (usec): min=18770, max=46104, avg=32681.10, stdev=1481.18
00:37:39.141 clat percentiles (usec):
00:37:39.141 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.142 | 99.00th=[34866], 99.50th=[34866], 99.90th=[44827], 99.95th=[45351],
00:37:39.142 | 99.99th=[45876]
00:37:39.142 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1952.00, stdev=56.87, samples=20
00:37:39.142 iops : min= 480, max= 512, avg=488.00, stdev=14.22, samples=20
00:37:39.142 lat (msec) : 20=0.33%, 50=99.67%
00:37:39.142 cpu : usr=98.97%, sys=0.74%, ctx=14, majf=0, minf=17
00:37:39.142 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename1: (groupid=0, jobs=1): err= 0: pid=661432: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=484, BW=1937KiB/s (1984kB/s)(19.0MiB/10044msec)
00:37:39.142 slat (nsec): min=5500, max=69003, avg=16896.83, stdev=10090.40
00:37:39.142 clat (usec): min=17842, max=69728, avg=32777.84, stdev=2418.83
00:37:39.142 lat (usec): min=17884, max=69747, avg=32794.74, stdev=2418.20
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.142 | 99.00th=[34866], 99.50th=[34866], 99.90th=[69731], 99.95th=[69731],
00:37:39.142 | 99.99th=[69731]
00:37:39.142 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1940.21, stdev=88.10, samples=19
00:37:39.142 iops : min= 416, max= 512, avg=485.05, stdev=22.02, samples=19
00:37:39.142 lat (msec) : 20=0.29%, 50=99.38%, 100=0.33%
00:37:39.142 cpu : usr=98.92%, sys=0.67%, ctx=103, majf=0, minf=20
00:37:39.142 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename1: (groupid=0, jobs=1): err= 0: pid=661433: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.2MiB/10022msec)
00:37:39.142 slat (nsec): min=5566, max=81766, avg=15877.50, stdev=12899.96
00:37:39.142 clat (usec): min=20281, max=49439, avg=32528.16, stdev=1941.09
00:37:39.142 lat (usec): min=20287, max=49448, avg=32544.04, stdev=1941.30
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.142 | 99.00th=[34866], 99.50th=[39584], 99.90th=[49546], 99.95th=[49546],
00:37:39.142 | 99.99th=[49546]
00:37:39.142 bw ( KiB/s): min= 1792, max= 2160, per=4.12%, avg=1952.00, stdev=80.35, samples=19
00:37:39.142 iops : min= 448, max= 540, avg=488.00, stdev=20.09, samples=19
00:37:39.142 lat (msec) : 50=100.00%
00:37:39.142 cpu : usr=98.79%, sys=0.87%, ctx=52, majf=0, minf=19
00:37:39.142 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename2: (groupid=0, jobs=1): err= 0: pid=661434: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.1MiB/10001msec)
00:37:39.142 slat (nsec): min=5565, max=66260, avg=15365.62, stdev=11410.05
00:37:39.142 clat (usec): min=18730, max=34781, avg=32666.40, stdev=1003.33
00:37:39.142 lat (usec): min=18738, max=34793, avg=32681.77, stdev=1002.72
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.142 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866],
00:37:39.142 | 99.99th=[34866]
00:37:39.142 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1946.95, stdev=53.61, samples=19
00:37:39.142 iops : min= 480, max= 512, avg=486.74, stdev=13.40, samples=19
00:37:39.142 lat (msec) : 20=0.33%, 50=99.67%
00:37:39.142 cpu : usr=99.12%, sys=0.59%, ctx=13, majf=0, minf=20
00:37:39.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename2: (groupid=0, jobs=1): err= 0: pid=661435: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10003msec)
00:37:39.142 slat (nsec): min=5555, max=82241, avg=21323.59, stdev=13749.12
00:37:39.142 clat (usec): min=17720, max=69707, avg=32713.70, stdev=2430.22
00:37:39.142 lat (usec): min=17729, max=69730, avg=32735.02, stdev=2429.58
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.142 | 99.00th=[34866], 99.50th=[34866], 99.90th=[69731], 99.95th=[69731],
00:37:39.142 | 99.99th=[69731]
00:37:39.142 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1940.21, stdev=84.64, samples=19
00:37:39.142 iops : min= 416, max= 512, avg=485.05, stdev=21.16, samples=19
00:37:39.142 lat (msec) : 20=0.33%, 50=99.30%, 100=0.37%
00:37:39.142 cpu : usr=99.15%, sys=0.56%, ctx=16, majf=0, minf=19
00:37:39.142 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename2: (groupid=0, jobs=1): err= 0: pid=661436: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10009msec)
00:37:39.142 slat (nsec): min=5369, max=65299, avg=19938.68, stdev=11279.66
00:37:39.142 clat (usec): min=17035, max=49577, avg=32633.49, stdev=1566.27
00:37:39.142 lat (usec): min=17041, max=49592, avg=32653.43, stdev=1566.47
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.142 | 99.00th=[34341], 99.50th=[34866], 99.90th=[49546], 99.95th=[49546],
00:37:39.142 | 99.99th=[49546]
00:37:39.142 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1940.21, stdev=64.19, samples=19
00:37:39.142 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19
00:37:39.142 lat (msec) : 20=0.33%, 50=99.67%
00:37:39.142 cpu : usr=98.89%, sys=0.74%, ctx=63, majf=0, minf=28
00:37:39.142 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename2: (groupid=0, jobs=1): err= 0: pid=661437: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=486, BW=1946KiB/s (1993kB/s)(19.0MiB/10009msec)
00:37:39.142 slat (nsec): min=5542, max=77296, avg=22484.49, stdev=14291.05
00:37:39.142 clat (usec): min=17882, max=59260, avg=32656.57, stdev=2428.04
00:37:39.142 lat (usec): min=17941, max=59279, avg=32679.06, stdev=2427.48
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[23200], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.142 | 99.00th=[42730], 99.50th=[49546], 99.90th=[58983], 99.95th=[59507],
00:37:39.142 | 99.99th=[59507]
00:37:39.142 bw ( KiB/s): min= 1795, max= 2048, per=4.09%, avg=1938.68, stdev=59.04, samples=19
00:37:39.142 iops : min= 448, max= 512, avg=484.63, stdev=14.86, samples=19
00:37:39.142 lat (msec) : 20=0.29%, 50=99.22%, 100=0.49%
00:37:39.142 cpu : usr=98.89%, sys=0.81%, ctx=13, majf=0, minf=21
00:37:39.142 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.142 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.142 filename2: (groupid=0, jobs=1): err= 0: pid=661438: Tue Nov 5 19:27:06 2024
00:37:39.142 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10003msec)
00:37:39.142 slat (nsec): min=5578, max=76373, avg=19537.12, stdev=12680.19
00:37:39.142 clat (usec): min=17645, max=81227, avg=32727.04, stdev=2512.34
00:37:39.142 lat (usec): min=17696, max=81246, avg=32746.58, stdev=2511.45
00:37:39.142 clat percentiles (usec):
00:37:39.142 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113],
00:37:39.142 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.142 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.142 | 99.00th=[34866], 99.50th=[34866], 99.90th=[69731], 99.95th=[69731],
00:37:39.142 | 99.99th=[81265]
00:37:39.142 bw ( KiB/s): min= 1664, max= 2048, per=4.09%, avg=1940.21, stdev=88.10, samples=19
00:37:39.142 iops : min= 416, max= 512, avg=485.05, stdev=22.02, samples=19
00:37:39.142 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:37:39.142 cpu : usr=99.06%, sys=0.65%, ctx=13, majf=0, minf=24
00:37:39.142 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:39.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.142 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.143 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.143 filename2: (groupid=0, jobs=1): err= 0: pid=661439: Tue Nov 5 19:27:06 2024
00:37:39.143 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10014msec)
00:37:39.143 slat (nsec): min=4904, max=71126, avg=20069.01, stdev=11806.83
00:37:39.143 clat (usec): min=17067, max=53796, avg=32636.78, stdev=1720.65
00:37:39.143 lat (usec): min=17073, max=53810, avg=32656.85, stdev=1720.72
00:37:39.143 clat percentiles (usec):
00:37:39.143 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113],
00:37:39.143 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.143 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817],
00:37:39.143 | 99.00th=[34341], 99.50th=[34866], 99.90th=[53740], 99.95th=[53740],
00:37:39.143 | 99.99th=[53740]
00:37:39.143 bw ( KiB/s): min= 1792, max= 2048, per=4.09%, avg=1940.21, stdev=64.19, samples=19
00:37:39.143 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19
00:37:39.143 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33%
00:37:39.143 cpu : usr=99.03%, sys=0.66%, ctx=40, majf=0, minf=19
00:37:39.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:39.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.143 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.143 filename2: (groupid=0, jobs=1): err= 0: pid=661440: Tue Nov 5 19:27:06 2024
00:37:39.143 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10010msec)
00:37:39.143 slat (nsec): min=5445, max=66470, avg=16616.30, stdev=11651.27
00:37:39.143 clat (usec): min=10703, max=49568, avg=31855.14, stdev=3799.26
00:37:39.143 lat (usec): min=10710, max=49584, avg=31871.76, stdev=3800.38
00:37:39.143 clat percentiles (usec):
00:37:39.143 | 1.00th=[17171], 5.00th=[23200], 10.00th=[27919], 20.00th=[32113],
00:37:39.143 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375],
00:37:39.143 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341],
00:37:39.143 | 99.00th=[44303], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546],
00:37:39.143 | 99.99th=[49546]
00:37:39.143 bw ( KiB/s): min= 1795, max= 2160, per=4.21%, avg=1993.42, stdev=91.75, samples=19
00:37:39.143 iops : min= 448, max= 540, avg=498.32, stdev=23.03, samples=19
00:37:39.143 lat (msec) : 20=1.24%, 50=98.76%
00:37:39.143 cpu : usr=99.02%, sys=0.69%, ctx=13, majf=0, minf=18
00:37:39.143 IO depths : 1=3.9%, 2=8.1%, 4=17.5%, 8=60.8%, 16=9.8%, 32=0.0%, >=64=0.0%
00:37:39.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.143 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.143 filename2: (groupid=0, jobs=1): err= 0: pid=661441: Tue Nov 5 19:27:06 2024
00:37:39.143 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.2MiB/10023msec)
00:37:39.143 slat (nsec): min=5567, max=65940, avg=18757.45, stdev=11825.87
00:37:39.143 clat (usec): min=12996, max=43071, avg=32480.20, stdev=1872.87
00:37:39.143 lat (usec): min=13006, max=43084, avg=32498.96, stdev=1873.32
00:37:39.143 clat percentiles (usec):
00:37:39.143 | 1.00th=[21627], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113],
00:37:39.143 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637],
00:37:39.143 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817],
00:37:39.143 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866],
00:37:39.143 | 99.99th=[43254]
00:37:39.143 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1958.40, stdev=60.18, samples=20
00:37:39.143 iops : min= 480, max= 512, avg=489.60, stdev=15.05, samples=20
00:37:39.143 lat (msec) : 20=0.98%, 50=99.02%
00:37:39.143 cpu : usr=98.87%, sys=0.77%, ctx=86, majf=0, minf=14
00:37:39.143 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:39.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:39.143 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:39.143 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:39.143
00:37:39.143 Run status group 0 (all jobs):
00:37:39.143 READ: bw=46.3MiB/s (48.5MB/s), 1937KiB/s-2182KiB/s (1984kB/s-2235kB/s), io=465MiB (487MB), run=10001-10044msec
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 bdev_null0
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 [2024-11-05 19:27:07.017026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.143 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.143 bdev_null1
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=()
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:37:39.144 {
00:37:39.144 "params": {
00:37:39.144 "name": "Nvme$subsystem",
00:37:39.144 "trtype": "$TEST_TRANSPORT",
00:37:39.144 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:39.144 "adrfam": "ipv4",
00:37:39.144 "trsvcid": "$NVMF_PORT",
00:37:39.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:39.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:39.144 "hdgst": ${hdgst:-false},
00:37:39.144 "ddgst": ${ddgst:-false}
00:37:39.144 },
00:37:39.144 "method": "bdev_nvme_attach_controller"
00:37:39.144 }
00:37:39.144 EOF
00:37:39.144 )")
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib=
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:37:39.144 {
00:37:39.144 "params": {
00:37:39.144 "name": "Nvme$subsystem",
00:37:39.144 "trtype": "$TEST_TRANSPORT",
00:37:39.144 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:39.144 "adrfam": "ipv4",
00:37:39.144 "trsvcid": "$NVMF_PORT",
00:37:39.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:39.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:39.144 "hdgst": ${hdgst:-false},
00:37:39.144 "ddgst": ${ddgst:-false}
00:37:39.144 },
00:37:39.144 "method": "bdev_nvme_attach_controller"
00:37:39.144 }
00:37:39.144 EOF
00:37:39.144 )")
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq .
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=,
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:37:39.144 "params": {
00:37:39.144 "name": "Nvme0",
00:37:39.144 "trtype": "tcp",
00:37:39.144 "traddr": "10.0.0.2",
00:37:39.144 "adrfam": "ipv4",
00:37:39.144 "trsvcid": "4420",
00:37:39.144 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:39.144 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:39.144 "hdgst": false,
00:37:39.144 "ddgst": false
00:37:39.144 },
00:37:39.144 "method": "bdev_nvme_attach_controller"
00:37:39.144 },{
00:37:39.144 "params": {
00:37:39.144 "name": "Nvme1",
00:37:39.144 "trtype": "tcp",
00:37:39.144 "traddr": "10.0.0.2",
00:37:39.144 "adrfam": "ipv4",
00:37:39.144 "trsvcid": "4420",
00:37:39.144 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:39.144 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:39.144 "hdgst": false,
00:37:39.144 "ddgst": false
00:37:39.144 },
00:37:39.144 "method": "bdev_nvme_attach_controller"
00:37:39.144 }'
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:39.144 19:27:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:39.144 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:37:39.144 ...
00:37:39.144 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:37:39.144 ...
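The trace above shows the complete pattern this test suite uses to drive fio against the NVMe-oF target without a kernel initiator: it generates an SPDK JSON config describing each remote controller on one file descriptor, hands the fio job file on another, and preloads the SPDK bdev fio plugin. A minimal standalone sketch of the same pattern follows; it is an illustration, not the test's code. The plugin path, IP, port and NQNs are the values from this run, the surrounding "subsystems"/"config" wrapper is SPDK's standard JSON config layout (the trace only prints the generated params fragments), and the bdev name Nvme0n1 assumes SPDK's usual controller-name + "n1" namespace naming:

    # Sketch: run stock fio through the SPDK bdev plugin against an NVMe/TCP target.
    # Assumes SPDK was built with ./configure --with-fio=/usr/src/fio.
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        }]
      }]
    }
    EOF
    # The attached controller's namespace appears to fio as bdev "Nvme0n1".
    LD_PRELOAD=$plugin /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/bdev.json --name=job0 --filename=Nvme0n1 \
        --rw=randread --bs=8k --iodepth=8 --time_based --runtime=5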
00:37:39.144 fio-3.35
00:37:39.144 Starting 4 threads
00:37:44.430
00:37:44.430 filename0: (groupid=0, jobs=1): err= 0: pid=663771: Tue Nov 5 19:27:13 2024
00:37:44.430 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5003msec)
00:37:44.430 slat (nsec): min=5383, max=30701, avg=6123.11, stdev=2319.87
00:37:44.430 clat (usec): min=2001, max=44851, avg=3837.28, stdev=1314.18
00:37:44.430 lat (usec): min=2007, max=44881, avg=3843.41, stdev=1314.39
00:37:44.430 clat percentiles (usec):
00:37:44.430 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3392],
00:37:44.430 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752],
00:37:44.430 | 70.00th=[ 3818], 80.00th=[ 4015], 90.00th=[ 4883], 95.00th=[ 5407],
00:37:44.430 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[44827],
00:37:44.430 | 99.99th=[44827]
00:37:44.430 bw ( KiB/s): min=15296, max=17200, per=24.68%, avg=16584.89, stdev=535.71, samples=9
00:37:44.430 iops : min= 1912, max= 2150, avg=2073.11, stdev=66.96, samples=9
00:37:44.430 lat (msec) : 4=79.38%, 10=20.55%, 50=0.08%
00:37:44.430 cpu : usr=96.98%, sys=2.78%, ctx=6, majf=0, minf=61
00:37:44.430 IO depths : 1=0.1%, 2=0.3%, 4=71.5%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:44.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 issued rwts: total=10386,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:44.430 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:44.430 filename0: (groupid=0, jobs=1): err= 0: pid=663772: Tue Nov 5 19:27:13 2024
00:37:44.430 read: IOPS=2119, BW=16.6MiB/s (17.4MB/s)(82.8MiB/5002msec)
00:37:44.430 slat (nsec): min=5390, max=60896, avg=6130.68, stdev=2389.14
00:37:44.430 clat (usec): min=1600, max=6383, avg=3757.68, stdev=541.02
00:37:44.430 lat (usec): min=1610, max=6388, avg=3763.81, stdev=540.84
00:37:44.430 clat percentiles (usec):
00:37:44.430 | 1.00th=[ 2671], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3458],
00:37:44.430 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3785],
00:37:44.430 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4146], 95.00th=[ 5145],
00:37:44.430 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6259],
00:37:44.430 | 99.99th=[ 6390]
00:37:44.430 bw ( KiB/s): min=16624, max=17424, per=25.26%, avg=16972.44, stdev=274.08, samples=9
00:37:44.430 iops : min= 2078, max= 2178, avg=2121.56, stdev=34.26, samples=9
00:37:44.430 lat (msec) : 2=0.05%, 4=82.08%, 10=17.88%
00:37:44.430 cpu : usr=97.12%, sys=2.64%, ctx=7, majf=0, minf=42
00:37:44.430 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:44.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 issued rwts: total=10600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:44.430 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:44.430 filename1: (groupid=0, jobs=1): err= 0: pid=663773: Tue Nov 5 19:27:13 2024
00:37:44.430 read: IOPS=2098, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5001msec)
00:37:44.430 slat (nsec): min=5384, max=63285, avg=7601.07, stdev=2227.85
00:37:44.430 clat (usec): min=1625, max=6407, avg=3790.89, stdev=505.72
00:37:44.430 lat (usec): min=1631, max=6415, avg=3798.49, stdev=505.52
00:37:44.430 clat percentiles (usec):
00:37:44.430 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3523],
00:37:44.430 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785],
00:37:44.430 | 70.00th=[ 3785], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 5145],
00:37:44.430 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6194], 99.95th=[ 6325],
00:37:44.430 | 99.99th=[ 6390]
00:37:44.430 bw ( KiB/s): min=16416, max=17074, per=24.99%, avg=16789.56, stdev=235.18, samples=9
00:37:44.430 iops : min= 2052, max= 2134, avg=2098.67, stdev=29.36, samples=9
00:37:44.430 lat (msec) : 2=0.03%, 4=80.75%, 10=19.22%
00:37:44.430 cpu : usr=96.40%, sys=3.32%, ctx=6, majf=0, minf=48
00:37:44.430 IO depths : 1=0.1%, 2=0.1%, 4=73.5%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:44.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 issued rwts: total=10495,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:44.430 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:44.430 filename1: (groupid=0, jobs=1): err= 0: pid=663774: Tue Nov 5 19:27:13 2024
00:37:44.430 read: IOPS=2107, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec)
00:37:44.430 slat (nsec): min=5391, max=36432, avg=6300.27, stdev=2668.47
00:37:44.430 clat (usec): min=1441, max=6293, avg=3780.94, stdev=509.17
00:37:44.430 lat (usec): min=1446, max=6298, avg=3787.24, stdev=509.13
00:37:44.430 clat percentiles (usec):
00:37:44.430 | 1.00th=[ 2802], 5.00th=[ 3163], 10.00th=[ 3359], 20.00th=[ 3523],
00:37:44.430 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3785],
00:37:44.430 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 5014],
00:37:44.430 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6128], 99.95th=[ 6128],
00:37:44.430 | 99.99th=[ 6259]
00:37:44.430 bw ( KiB/s): min=16416, max=17216, per=25.07%, avg=16846.22, stdev=256.43, samples=9
00:37:44.430 iops : min= 2052, max= 2152, avg=2105.78, stdev=32.05, samples=9
00:37:44.430 lat (msec) : 2=0.08%, 4=81.24%, 10=18.69%
00:37:44.430 cpu : usr=96.68%, sys=3.06%, ctx=7, majf=0, minf=35
00:37:44.430 IO depths : 1=0.1%, 2=0.1%, 4=67.4%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:37:44.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:44.430 issued rwts: total=10541,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:44.430 latency : target=0, window=0, percentile=100.00%, depth=8
00:37:44.430
00:37:44.430 Run status group 0 (all jobs):
00:37:44.430 READ: bw=65.6MiB/s (68.8MB/s), 16.2MiB/s-16.6MiB/s (17.0MB/s-17.4MB/s), io=328MiB (344MB), run=5001-5003msec
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:44.430
00:37:44.430 real 0m24.557s
00:37:44.430 user 5m20.760s
00:37:44.430 sys 0m4.325s
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 ************************************
00:37:44.430 END TEST fio_dif_rand_params
00:37:44.430 ************************************
00:37:44.430 19:27:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:37:44.430 19:27:13 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:37:44.430 19:27:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:44.430 19:27:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:37:44.430 ************************************
00:37:44.430 START TEST fio_dif_digest
00:37:44.430 ************************************
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
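For reference, the create_subsystem/destroy_subsystem helpers being traced here boil down to a short RPC sequence. A sketch of the equivalent calls via SPDK's scripts/rpc.py, using the exact arguments visible in this log (--dif-type 3 is what the digest test passes below; the earlier rand_params subsystems used --dif-type 1):

    # Setup: a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF,
    # exported as namespace 1 of an NVMe/TCP subsystem on 10.0.0.2:4420.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # ... run I/O ...
    # Teardown mirrors it, as the destroy_subsystem trace above shows.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0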
00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.430 bdev_null0 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.430 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:44.431 [2024-11-05 19:27:13.698607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=()
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:37:44.431 {
00:37:44.431 "params": {
00:37:44.431 "name": "Nvme$subsystem",
00:37:44.431 "trtype": "$TEST_TRANSPORT",
00:37:44.431 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:44.431 "adrfam": "ipv4",
00:37:44.431 "trsvcid": "$NVMF_PORT",
00:37:44.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:44.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:44.431 "hdgst": ${hdgst:-false},
00:37:44.431 "ddgst": ${ddgst:-false}
00:37:44.431 },
00:37:44.431 "method": "bdev_nvme_attach_controller"
00:37:44.431 }
00:37:44.431 EOF
00:37:44.431 )")
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq .
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=,
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:37:44.431 "params": {
00:37:44.431 "name": "Nvme0",
00:37:44.431 "trtype": "tcp",
00:37:44.431 "traddr": "10.0.0.2",
00:37:44.431 "adrfam": "ipv4",
00:37:44.431 "trsvcid": "4420",
00:37:44.431 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:44.431 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:44.431 "hdgst": true,
00:37:44.431 "ddgst": true
00:37:44.431 },
00:37:44.431 "method": "bdev_nvme_attach_controller"
00:37:44.431 }'
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan
00:37:44.431 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:37:44.716 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=
00:37:44.716 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]]
00:37:44.716 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:37:44.716 19:27:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:37:44.981 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:37:44.981 ...
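A note for reading the digest run that follows: the job requested bs=128k,128k,128k (fio's read/write/trim block sizes), numjobs=3, iodepth=3 and runtime=10, and the JSON above enables NVMe/TCP header and data digests ("hdgst": true, "ddgst": true). With a fixed 128 KiB block size, bandwidth and IOPS in the per-thread stats relate as IOPS ≈ BW / 128 KiB; for example 27.9 MiB/s × 1024 / 128 ≈ 223, matching the avg=223.80 iops line below. An equivalent standalone invocation might look like this sketch (the job name, config path and bdev name are illustrative, not taken from the trace):

    # Sketch: the digest workload expressed as a plain fio command line,
    # assuming digest.json attaches the controller with hdgst/ddgst set to true.
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/digest.json \
        --name=digest0 --filename=Nvme0n1 --rw=randread \
        --bs=128k --numjobs=3 --iodepth=3 --time_based --runtime=10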
00:37:44.981 fio-3.35 00:37:44.981 Starting 3 threads 00:37:57.220 00:37:57.220 filename0: (groupid=0, jobs=1): err= 0: pid=665707: Tue Nov 5 19:27:24 2024 00:37:57.220 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(280MiB/10047msec) 00:37:57.220 slat (nsec): min=5628, max=36695, avg=8661.37, stdev=1718.16 00:37:57.220 clat (usec): min=7970, max=53353, avg=13425.58, stdev=1593.72 00:37:57.220 lat (usec): min=7979, max=53359, avg=13434.24, stdev=1593.72 00:37:57.220 clat percentiles (usec): 00:37:57.220 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:37:57.220 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:37:57.220 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:37:57.220 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16712], 99.95th=[51119], 00:37:57.220 | 99.99th=[53216] 00:37:57.220 bw ( KiB/s): min=27648, max=29440, per=34.21%, avg=28646.40, stdev=483.59, samples=20 00:37:57.220 iops : min= 216, max= 230, avg=223.80, stdev= 3.78, samples=20 00:37:57.220 lat (msec) : 10=1.61%, 20=98.30%, 100=0.09% 00:37:57.220 cpu : usr=93.51%, sys=5.50%, ctx=356, majf=0, minf=128 00:37:57.220 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.220 filename0: (groupid=0, jobs=1): err= 0: pid=665708: Tue Nov 5 19:27:24 2024 00:37:57.220 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(264MiB/10047msec) 00:37:57.220 slat (nsec): min=5631, max=30732, avg=6566.01, stdev=1086.48 00:37:57.220 clat (usec): min=10128, max=56836, avg=14230.42, stdev=3757.63 00:37:57.220 lat (usec): min=10135, max=56842, avg=14236.98, stdev=3757.63 00:37:57.220 clat percentiles (usec): 00:37:57.220 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:37:57.220 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[14222], 00:37:57.220 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15795], 00:37:57.220 | 99.00th=[17433], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:37:57.220 | 99.99th=[56886] 00:37:57.220 bw ( KiB/s): min=25088, max=28416, per=32.28%, avg=27033.60, stdev=1077.83, samples=20 00:37:57.220 iops : min= 196, max= 222, avg=211.20, stdev= 8.42, samples=20 00:37:57.220 lat (msec) : 20=99.20%, 50=0.09%, 100=0.71% 00:37:57.220 cpu : usr=95.62%, sys=4.17%, ctx=22, majf=0, minf=140 00:37:57.220 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.220 filename0: (groupid=0, jobs=1): err= 0: pid=665709: Tue Nov 5 19:27:24 2024 00:37:57.220 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(278MiB/10049msec) 00:37:57.220 slat (nsec): min=5707, max=32195, avg=7695.76, stdev=1587.02 00:37:57.220 clat (usec): min=8092, max=52167, avg=13550.56, stdev=1632.95 00:37:57.220 lat (usec): min=8101, max=52173, avg=13558.26, stdev=1632.91 00:37:57.220 clat percentiles (usec): 00:37:57.220 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:37:57.220 
| 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:37:57.220 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:37:57.220 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17957], 99.95th=[49546], 00:37:57.220 | 99.99th=[52167] 00:37:57.220 bw ( KiB/s): min=27392, max=29440, per=33.90%, avg=28390.40, stdev=604.10, samples=20 00:37:57.220 iops : min= 214, max= 230, avg=221.80, stdev= 4.72, samples=20 00:37:57.220 lat (msec) : 10=1.49%, 20=98.42%, 50=0.05%, 100=0.05% 00:37:57.220 cpu : usr=95.02%, sys=4.75%, ctx=15, majf=0, minf=36 00:37:57.220 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:57.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:57.220 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:57.220 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:57.220 00:37:57.220 Run status group 0 (all jobs): 00:37:57.220 READ: bw=81.8MiB/s (85.7MB/s), 26.3MiB/s-27.9MiB/s (27.6MB/s-29.2MB/s), io=822MiB (862MB), run=10047-10049msec 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:57.220 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.221 00:37:57.221 real 0m11.138s 00:37:57.221 user 0m41.501s 00:37:57.221 sys 0m1.746s 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:57.221 19:27:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:57.221 ************************************ 00:37:57.221 END TEST fio_dif_digest 00:37:57.221 ************************************ 00:37:57.221 19:27:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:57.221 19:27:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:57.221 rmmod nvme_tcp 00:37:57.221 rmmod nvme_fabrics 00:37:57.221 rmmod nvme_keyring 00:37:57.221 19:27:24 nvmf_dif -- 
nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 654655 ']' 00:37:57.221 19:27:24 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 654655 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 654655 ']' 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 654655 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 654655 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 654655' 00:37:57.221 killing process with pid 654655 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@971 -- # kill 654655 00:37:57.221 19:27:24 nvmf_dif -- common/autotest_common.sh@976 -- # wait 654655 00:37:57.221 19:27:25 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:37:57.221 19:27:25 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:59.140 Waiting for block devices as requested 00:37:59.401 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:59.401 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:59.401 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:59.661 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:59.661 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.661 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.923 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.923 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:59.923 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:00.183 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:00.183 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:00.183 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:00.445 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:00.445 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:00.445 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:00.445 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:00.706 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:00.966 19:27:30 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:38:00.966 19:27:30 nvmf_dif -- nvmf/setup.sh@264 -- # local dev 00:38:00.966 19:27:30 nvmf_dif -- nvmf/setup.sh@267 -- # remove_target_ns 00:38:00.966 19:27:30 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:00.966 19:27:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:00.966 19:27:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@268 -- # delete_main_bridge 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@130 -- # return 0 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:02.881 19:27:32 
nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:38:02.881 19:27:32 nvmf_dif -- nvmf/setup.sh@284 -- # iptr 00:38:02.881 19:27:32 nvmf_dif -- nvmf/common.sh@542 -- # iptables-save 00:38:02.881 19:27:32 nvmf_dif -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:38:02.881 19:27:32 nvmf_dif -- nvmf/common.sh@542 -- # iptables-restore 00:38:02.881 00:38:02.881 real 1m18.183s 00:38:02.881 user 8m2.182s 00:38:02.881 sys 0m21.719s 00:38:02.881 19:27:32 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:02.881 19:27:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:02.881 ************************************ 00:38:02.881 END TEST nvmf_dif 00:38:02.881 ************************************ 00:38:03.143 19:27:32 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:03.143 19:27:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:03.143 19:27:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:03.143 19:27:32 -- common/autotest_common.sh@10 -- # set +x 00:38:03.143 ************************************ 00:38:03.143 START TEST nvmf_abort_qd_sizes 00:38:03.143 ************************************ 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:03.143 * Looking for test storage... 
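The teardown traced above reduces to a small, reusable pattern: flush the test IP addresses off both physical interfaces, then restore an iptables ruleset with the suite's tagged rules filtered out. A minimal bash sketch of that pattern, assuming the cvl_0_0/cvl_0_1 device names and the SPDK_NVMF comment tag seen in this run (an illustrative reconstruction, not the literal setup.sh source):

  flush_ip() {
    # drop the 10.0.0.x test addresses assigned during setup
    ip addr flush dev "$1"
  }
  iptr() {
    # delete only rules carrying the SPDK_NVMF comment; unrelated rules survive
    iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
  for dev in cvl_0_0 cvl_0_1; do
    [[ -e /sys/class/net/$dev/address ]] && flush_ip "$dev"
  done
  iptr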
00:38:03.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.143 --rc genhtml_branch_coverage=1 00:38:03.143 --rc genhtml_function_coverage=1 00:38:03.143 --rc genhtml_legend=1 00:38:03.143 --rc geninfo_all_blocks=1 00:38:03.143 --rc geninfo_unexecuted_blocks=1 00:38:03.143 00:38:03.143 ' 00:38:03.143 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:38:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.143 --rc genhtml_branch_coverage=1 00:38:03.143 --rc genhtml_function_coverage=1 00:38:03.143 --rc genhtml_legend=1 00:38:03.143 --rc geninfo_all_blocks=1 00:38:03.143 --rc geninfo_unexecuted_blocks=1 00:38:03.144 00:38:03.144 ' 00:38:03.144 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.144 --rc genhtml_branch_coverage=1 00:38:03.144 --rc genhtml_function_coverage=1 00:38:03.144 --rc genhtml_legend=1 00:38:03.144 --rc geninfo_all_blocks=1 00:38:03.144 --rc geninfo_unexecuted_blocks=1 00:38:03.144 00:38:03.144 ' 00:38:03.144 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:03.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.144 --rc genhtml_branch_coverage=1 00:38:03.144 --rc genhtml_function_coverage=1 00:38:03.144 --rc genhtml_legend=1 00:38:03.144 --rc geninfo_all_blocks=1 00:38:03.144 --rc geninfo_unexecuted_blocks=1 00:38:03.144 00:38:03.144 ' 00:38:03.144 19:27:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:03.144 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:03.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:03.406 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 00:38:03.407 19:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:11.549 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:11.549 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.549 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:11.549 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ 
tcp == tcp ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:11.550 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # create_target_ns 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:11.550 19:27:39 
nvmf_abort_qd_sizes -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:38:11.550 10.0.0.1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 
10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:11.550 10.0.0.2 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:38:11.550 19:27:39 
nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:11.550 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:11.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:11.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.722 ms 00:38:11.551 00:38:11.551 --- 10.0.0.1 ping statistics --- 00:38:11.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.551 rtt min/avg/max/mdev = 0.722/0.722/0.722/0.000 ms 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:38:11.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:11.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:38:11.551 00:38:11.551 --- 10.0.0.2 ping statistics --- 00:38:11.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.551 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair++ )) 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:38:11.551 19:27:39 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:14.097 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:14.097 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:14.359 19:27:43 
nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:38:14.359 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:14.360 19:27:43 
nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target1 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target1 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:14.360 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=675161 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 675161 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 675161 ']' 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.621 19:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:14.621 [2024-11-05 19:27:43.770718] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
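nvmfappstart above launches the target inside the nvmf_ns_spdk namespace and then blocks on waitforlisten until the RPC socket answers. A rough sketch of that startup sequence, with the polling loop below standing in for the real waitforlisten helper (which retries an actual RPC rather than merely checking that the socket exists):

  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # give the app up to ~10s to create and service /var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
  done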
00:38:14.621 [2024-11-05 19:27:43.770816] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.621 [2024-11-05 19:27:43.856552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:14.621 [2024-11-05 19:27:43.900105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.621 [2024-11-05 19:27:43.900140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:14.621 [2024-11-05 19:27:43.900149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.621 [2024-11-05 19:27:43.900155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.621 [2024-11-05 19:27:43.900165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.621 [2024-11-05 19:27:43.902026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.621 [2024-11-05 19:27:43.902142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.621 [2024-11-05 19:27:43.902299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.621 [2024-11-05 19:27:43.902299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:15.563 19:27:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:15.563 ************************************ 00:38:15.563 START TEST spdk_target_abort 00:38:15.563 ************************************ 00:38:15.563 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:38:15.563 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:15.563 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:15.563 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.563 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.824 spdk_targetn1 00:38:15.824 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.824 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:15.824 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.824 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.825 [2024-11-05 19:27:44.984797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.825 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.825 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:15.825 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.825 19:27:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.825 [2024-11-05 19:27:45.037106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:15.825 19:27:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:16.086 [2024-11-05 19:27:45.228196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:312 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.228226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.228816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:344 len:8 PRP1 0x200004abe000 PRP2 0x0 
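The rabort helper traced above assembles the transport ID string field by field and then runs the abort example once per queue depth in qds=(4 24 64). Condensed into a standalone sketch (paths and arguments taken from the trace; error handling omitted):

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
    # 4 KiB I/O, 50/50 read/write mix, $qd commands in flight while aborts
    # are issued against the outstanding I/O (the ABORTED - BY REQUEST
    # completions above are those aborts landing)
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done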
00:38:16.086 [2024-11-05 19:27:45.228828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:002d p:1 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.268237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1760 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.268254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00de p:1 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.292218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2632 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.292234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.316196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3528 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.316212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00bb p:0 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.324701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3856 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.324716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00e3 p:0 m:0 dnr:0 00:38:16.086 [2024-11-05 19:27:45.327461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:4040 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:38:16.086 [2024-11-05 19:27:45.327475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:38:19.389 Initializing NVMe Controllers 00:38:19.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:19.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:19.389 Initialization complete. Launching workers. 
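For reference, the spdk_target_abort setup that the trace above steps through reduces to five RPCs against the running nvmf target. A condensed sketch using the same names and addresses that appear in the log (rpc.py stands in for the suite's rpc_cmd wrapper):

    # claim the local PCIe NVMe device as a bdev, then export it over NVMe/TCP
    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # yields bdev spdk_targetn1
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420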
00:38:19.389 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12649, failed: 7 00:38:19.389 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3554, failed to submit 9102 00:38:19.389 success 743, unsuccessful 2811, failed 0 00:38:19.389 19:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:19.389 19:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.389 [2024-11-05 19:27:48.573915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1960 len:8 PRP1 0x200004e40000 PRP2 0x0 00:38:19.389 [2024-11-05 19:27:48.573956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:38:19.389 [2024-11-05 19:27:48.589870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:2344 len:8 PRP1 0x200004e58000 PRP2 0x0 00:38:19.389 [2024-11-05 19:27:48.589895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:19.389 [2024-11-05 19:27:48.613901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2944 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:38:19.389 [2024-11-05 19:27:48.613925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:19.650 [2024-11-05 19:27:48.919954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:10168 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:38:19.650 [2024-11-05 19:27:48.919984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:38:21.067 [2024-11-05 19:27:50.103059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:37008 len:8 PRP1 0x200004e56000 PRP2 0x0 00:38:21.068 [2024-11-05 19:27:50.103100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:38:22.491 Initializing NVMe Controllers 00:38:22.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:22.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:22.491 Initialization complete. Launching workers. 
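Each "Initializing NVMe Controllers" banner above marks one invocation of the abort example; rabort loops it over three queue depths, which is why the banner repeats. A minimal sketch of that loop as run here (-q queue depth, -w workload, -M read percentage, -o I/O size in bytes, -r the target's transport ID string):

    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The tool issues I/O at the given depth while firing abort commands at in-flight requests; the "success, unsuccessful, failed" line after each pass counts, roughly, aborts that took effect, aborts that completed without killing their I/O, and abort submissions that themselves errored.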
00:38:22.491 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 5 00:38:22.491 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1206, failed to submit 7335 00:38:22.491 success 374, unsuccessful 832, failed 0 00:38:22.491 19:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:22.491 19:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:25.792 Initializing NVMe Controllers 00:38:25.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:25.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:25.792 Initialization complete. Launching workers. 00:38:25.792 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41958, failed: 0 00:38:25.792 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2697, failed to submit 39261 00:38:25.792 success 575, unsuccessful 2122, failed 0 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.792 19:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 675161 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 675161 ']' 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 675161 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 675161 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 675161' 00:38:27.706 killing process with pid 675161 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 675161 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- 
# wait 675161 00:38:27.706 00:38:27.706 real 0m12.317s 00:38:27.706 user 0m50.244s 00:38:27.706 sys 0m1.916s 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:27.706 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:27.706 ************************************ 00:38:27.706 END TEST spdk_target_abort 00:38:27.706 ************************************ 00:38:27.706 19:27:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:27.706 19:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:27.706 19:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:27.706 19:27:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:27.967 ************************************ 00:38:27.967 START TEST kernel_target_abort 00:38:27.967 ************************************ 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:38:27.967 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:27.968 19:27:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:31.266 Waiting for block devices as requested 00:38:31.266 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:31.266 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:31.266 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:31.527 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:31.527 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:31.527 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:31.789 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:31.789 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:31.789 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:32.050 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:32.050 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:32.311 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:32.311 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:32.311 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:32.311 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:32.571 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:32.571 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:32.832 No valid GPT data, bailing 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:38:32.832 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:32.833 19:28:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:38:32.833 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:33.093 00:38:33.093 Discovery Log Number of Records 2, Generation counter 2 00:38:33.093 =====Discovery Log Entry 0====== 00:38:33.093 trtype: tcp 00:38:33.093 adrfam: ipv4 00:38:33.093 subtype: current discovery subsystem 00:38:33.093 treq: not specified, sq flow control disable supported 00:38:33.093 portid: 1 00:38:33.093 trsvcid: 4420 00:38:33.093 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:33.093 traddr: 10.0.0.1 00:38:33.093 eflags: none 00:38:33.093 sectype: none 00:38:33.093 =====Discovery Log Entry 1====== 00:38:33.093 trtype: tcp 00:38:33.093 adrfam: ipv4 00:38:33.093 subtype: nvme subsystem 00:38:33.093 treq: not specified, sq flow control disable supported 00:38:33.093 portid: 1 00:38:33.093 trsvcid: 4420 00:38:33.093 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:33.093 traddr: 10.0.0.1 00:38:33.093 eflags: none 00:38:33.093 sectype: none 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:33.093 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:33.094 19:28:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:33.094 19:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.394 Initializing NVMe Controllers 00:38:36.394 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:36.394 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:36.394 Initialization complete. Launching workers. 00:38:36.394 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67096, failed: 0 00:38:36.394 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67096, failed to submit 0 00:38:36.394 success 0, unsuccessful 67096, failed 0 00:38:36.394 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:36.394 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:39.826 Initializing NVMe Controllers 00:38:39.826 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.826 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:39.826 Initialization complete. Launching workers. 
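The kernel_target_abort pass exercises the in-kernel nvmet target instead of the SPDK one: the mkdir/echo/ln -s calls in the trace above are writes into the nvmet configfs tree. The xtrace output elides the redirect targets, so the attribute paths below are the stock nvmet configfs names rather than strings copied from the log; a condensed sketch:

    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    mkdir ports/1
    echo tcp      > ports/1/addr_trtype
    echo ipv4     > ports/1/addr_adrfam
    echo 10.0.0.1 > ports/1/addr_traddr
    echo 4420     > ports/1/addr_trsvcid
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The nvme discover output further up confirms the result: a discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both reachable at 10.0.0.1:4420 over TCP.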
00:38:39.826 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107948, failed: 0 00:38:39.826 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27178, failed to submit 80770 00:38:39.826 success 0, unsuccessful 27178, failed 0 00:38:39.826 19:28:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:39.826 19:28:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:42.371 Initializing NVMe Controllers 00:38:42.371 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:42.371 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:42.371 Initialization complete. Launching workers. 00:38:42.371 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101527, failed: 0 00:38:42.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25370, failed to submit 76157 00:38:42.371 success 0, unsuccessful 25370, failed 0 00:38:42.371 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:42.371 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:42.371 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:38:42.372 19:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:45.676 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:45.676 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:45.676 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:47.062 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:47.323 00:38:47.323 real 0m19.558s 00:38:47.323 user 0m9.507s 00:38:47.323 sys 0m5.645s 00:38:47.323 19:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:47.323 19:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:47.323 ************************************ 00:38:47.323 END TEST kernel_target_abort 00:38:47.323 ************************************ 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:47.584 rmmod nvme_tcp 00:38:47.584 rmmod nvme_fabrics 00:38:47.584 rmmod nvme_keyring 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 675161 ']' 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 675161 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 675161 ']' 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 675161 00:38:47.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (675161) - No such process 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 675161 is not found' 00:38:47.584 Process with pid 675161 is not found 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:38:47.584 19:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:50.890 Waiting for block devices as requested 00:38:50.890 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:50.890 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:50.890 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:51.152 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:51.152 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:51.152 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:51.413 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:51.413 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:51.413 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:51.674 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:51.674 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:51.674 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:51.935 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:51.935 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:51.935 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:52.196 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:52.196 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- nvmf/setup.sh@264 -- # local dev 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- nvmf/setup.sh@267 -- # remove_target_ns 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:52.457 19:28:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@268 -- # delete_main_bridge 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # return 0 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@284 -- # iptr 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-save 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-restore 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:38:55.006 00:38:55.006 real 0m51.510s 00:38:55.006 user 1m5.077s 00:38:55.006 sys 0m18.498s 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:55.006 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:55.006 ************************************ 00:38:55.006 END TEST nvmf_abort_qd_sizes 00:38:55.006 ************************************ 00:38:55.006 19:28:23 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:55.006 19:28:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:38:55.006 19:28:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:55.006 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:38:55.006 ************************************ 00:38:55.006 START TEST keyring_file 00:38:55.006 ************************************ 00:38:55.006 19:28:23 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:55.006 * Looking for test storage... 00:38:55.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:55.006 19:28:23 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:55.007 19:28:23 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:55.007 19:28:23 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.007 --rc genhtml_branch_coverage=1 00:38:55.007 --rc genhtml_function_coverage=1 00:38:55.007 --rc genhtml_legend=1 00:38:55.007 --rc geninfo_all_blocks=1 00:38:55.007 --rc geninfo_unexecuted_blocks=1 00:38:55.007 00:38:55.007 ' 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.007 --rc genhtml_branch_coverage=1 00:38:55.007 --rc genhtml_function_coverage=1 00:38:55.007 --rc genhtml_legend=1 00:38:55.007 --rc geninfo_all_blocks=1 00:38:55.007 --rc geninfo_unexecuted_blocks=1 00:38:55.007 00:38:55.007 ' 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.007 --rc genhtml_branch_coverage=1 00:38:55.007 --rc genhtml_function_coverage=1 00:38:55.007 --rc genhtml_legend=1 00:38:55.007 --rc geninfo_all_blocks=1 00:38:55.007 --rc geninfo_unexecuted_blocks=1 00:38:55.007 00:38:55.007 ' 00:38:55.007 19:28:24 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:55.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.007 --rc genhtml_branch_coverage=1 00:38:55.007 --rc genhtml_function_coverage=1 00:38:55.007 --rc genhtml_legend=1 00:38:55.007 --rc geninfo_all_blocks=1 00:38:55.007 --rc geninfo_unexecuted_blocks=1 00:38:55.007 00:38:55.007 ' 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.007 
19:28:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.007 19:28:24 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.007 19:28:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.007 19:28:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.007 19:28:24 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.007 19:28:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:55.007 19:28:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:55.007 19:28:24 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:55.007 19:28:24 
keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:55.007 19:28:24 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@50 -- # : 0 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:55.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.w3YSw0egSg 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:38:55.007 19:28:24 keyring_file -- nvmf/common.sh@507 -- # python - 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.w3YSw0egSg 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.w3YSw0egSg 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.w3YSw0egSg 00:38:55.007 19:28:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # 
key=112233445566778899aabbccddeeff00 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:55.007 19:28:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:55.008 19:28:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LdwFIwL98s 00:38:55.008 19:28:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:38:55.008 19:28:24 keyring_file -- nvmf/common.sh@507 -- # python - 00:38:55.008 19:28:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LdwFIwL98s 00:38:55.008 19:28:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LdwFIwL98s 00:38:55.008 19:28:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LdwFIwL98s 00:38:55.008 19:28:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=685249 00:38:55.008 19:28:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 685249 00:38:55.008 19:28:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 685249 ']' 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:55.008 19:28:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:55.008 [2024-11-05 19:28:24.274201] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
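The two /tmp/tmp.* files that the bperf RPCs register below were produced by prep_key just above. A minimal sketch of that flow, eliding the internals of format_interchange_psk (the suite's helper that wraps a hex key into the NVMeTLSkey-1 interchange encoding via the short python snippet visible in the trace):

    key0=00112233445566778899aabbccddeeff
    path=$(mktemp)                                # /tmp/tmp.w3YSw0egSg in this run
    format_interchange_psk "$key0" 0 > "$path"    # digest 0; emits an NVMeTLSkey-1 string
    chmod 0600 "$path"                            # keys are registered with owner-only permissions
    rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

key1 (112233445566778899aabbccddeeff00) goes through the same steps into /tmp/tmp.LdwFIwL98s, and the jq checks that follow simply confirm that both keys show up in keyring_get_keys with the expected paths and refcounts.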
00:38:55.008 [2024-11-05 19:28:24.274282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685249 ] 00:38:55.270 [2024-11-05 19:28:24.349356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.270 [2024-11-05 19:28:24.391617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.842 19:28:25 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:55.842 19:28:25 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:55.842 19:28:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:55.842 19:28:25 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.842 19:28:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:55.842 [2024-11-05 19:28:25.067151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.842 null0 00:38:55.842 [2024-11-05 19:28:25.099209] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:55.843 [2024-11-05 19:28:25.099501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.843 19:28:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:55.843 [2024-11-05 19:28:25.131275] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:55.843 request: 00:38:55.843 { 00:38:55.843 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.843 "secure_channel": false, 00:38:55.843 "listen_address": { 00:38:55.843 "trtype": "tcp", 00:38:55.843 "traddr": "127.0.0.1", 00:38:55.843 "trsvcid": "4420" 00:38:55.843 }, 00:38:55.843 "method": "nvmf_subsystem_add_listener", 00:38:55.843 "req_id": 1 00:38:55.843 } 00:38:55.843 Got JSON-RPC error response 00:38:55.843 response: 00:38:55.843 { 00:38:55.843 "code": -32602, 00:38:55.843 "message": "Invalid parameters" 00:38:55.843 } 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:55.843 19:28:25 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:55.843 19:28:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=685382 00:38:55.843 19:28:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 685382 /var/tmp/bperf.sock 00:38:55.843 19:28:25 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 685382 ']' 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:55.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:55.843 19:28:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:56.104 [2024-11-05 19:28:25.200195] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:38:56.104 [2024-11-05 19:28:25.200245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685382 ] 00:38:56.104 [2024-11-05 19:28:25.287272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.104 [2024-11-05 19:28:25.323126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.677 19:28:25 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:56.677 19:28:25 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:56.677 19:28:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg 00:38:56.677 19:28:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg 00:38:56.938 19:28:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LdwFIwL98s 00:38:56.938 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LdwFIwL98s 00:38:57.198 19:28:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:57.199 19:28:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.199 19:28:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.w3YSw0egSg == \/\t\m\p\/\t\m\p\.\w\3\Y\S\w\0\e\g\S\g ]] 00:38:57.199 19:28:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:57.199 19:28:26 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key1")' 00:38:57.199 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.459 19:28:26 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.LdwFIwL98s == \/\t\m\p\/\t\m\p\.\L\d\w\F\I\w\L\9\8\s ]] 00:38:57.459 19:28:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:57.459 19:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:57.459 19:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.459 19:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.459 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.459 19:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.720 19:28:26 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:57.720 19:28:26 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.720 19:28:26 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:57.720 19:28:26 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:57.720 19:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:57.981 [2024-11-05 19:28:27.125254] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:57.981 nvme0n1 00:38:57.981 19:28:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:57.981 19:28:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:57.981 19:28:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.981 19:28:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.981 19:28:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.981 19:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.241 19:28:27 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:58.241 19:28:27 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:58.241 19:28:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:58.241 19:28:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:58.241 19:28:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.241 19:28:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:58.241 19:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:58.502 19:28:27 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:58.502 19:28:27 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:58.502 Running I/O for 1 seconds... 00:38:59.445 16324.00 IOPS, 63.77 MiB/s 00:38:59.445 Latency(us) 00:38:59.445 [2024-11-05T18:28:28.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.445 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:59.445 nvme0n1 : 1.01 16336.11 63.81 0.00 0.00 7805.83 6116.69 16165.55 00:38:59.445 [2024-11-05T18:28:28.768Z] =================================================================================================================== 00:38:59.445 [2024-11-05T18:28:28.768Z] Total : 16336.11 63.81 0.00 0.00 7805.83 6116.69 16165.55 00:38:59.445 { 00:38:59.445 "results": [ 00:38:59.445 { 00:38:59.445 "job": "nvme0n1", 00:38:59.445 "core_mask": "0x2", 00:38:59.445 "workload": "randrw", 00:38:59.445 "percentage": 50, 00:38:59.445 "status": "finished", 00:38:59.445 "queue_depth": 128, 00:38:59.445 "io_size": 4096, 00:38:59.445 "runtime": 1.007094, 00:38:59.445 "iops": 16336.111624138362, 00:38:59.445 "mibps": 63.81293603179048, 00:38:59.445 "io_failed": 0, 00:38:59.445 "io_timeout": 0, 00:38:59.445 "avg_latency_us": 7805.827765621201, 00:38:59.445 "min_latency_us": 6116.693333333334, 00:38:59.445 "max_latency_us": 16165.546666666667 00:38:59.445 } 00:38:59.445 ], 00:38:59.445 "core_count": 1 00:38:59.445 } 00:38:59.445 19:28:28 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:59.445 19:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:59.706 19:28:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:59.706 19:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.706 19:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.706 19:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.706 19:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.706 19:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.968 19:28:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:59.968 19:28:29 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:59.968 19:28:29 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:59.968 19:28:29 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:59.968 
19:28:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:59.968 19:28:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:59.968 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:00.229 [2024-11-05 19:28:29.382545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:00.229 [2024-11-05 19:28:29.383196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd7c10 (107): Transport endpoint is not connected 00:39:00.229 [2024-11-05 19:28:29.384191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd7c10 (9): Bad file descriptor 00:39:00.229 [2024-11-05 19:28:29.385193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:00.229 [2024-11-05 19:28:29.385206] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:00.229 [2024-11-05 19:28:29.385212] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:00.229 [2024-11-05 19:28:29.385219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
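Every get_refcnt expansion in this trace is the same three-step pattern: call keyring_get_keys on the bperf app's RPC socket, select one key by name, and read its refcnt. A minimal sketch of those helpers, reconstructed from the expansions above (the names and paths are the ones the log itself uses; the function bodies are a sketch, not the verbatim keyring/common.sh):

    bperf_cmd() {
        # Talk to the bdevperf instance over its private RPC socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    get_key() {
        # keyring_get_keys returns a JSON array of key objects; pick one by name
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }

    get_refcnt() {
        get_key "$1" | jq -r .refcnt
    }

An attached TLS controller holds a reference on its PSK, which is why the checks above expect refcnt 2 for key0 while nvme0 is attached and 1 once it is detached, and why the attach just attempted with the wrong key (key1) must fail, as the JSON-RPC error below confirms.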
00:39:00.229 request:
00:39:00.229 {
00:39:00.229 "name": "nvme0",
00:39:00.229 "trtype": "tcp",
00:39:00.229 "traddr": "127.0.0.1",
00:39:00.229 "adrfam": "ipv4",
00:39:00.229 "trsvcid": "4420",
00:39:00.229 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:00.229 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:00.229 "prchk_reftag": false,
00:39:00.229 "prchk_guard": false,
00:39:00.229 "hdgst": false,
00:39:00.229 "ddgst": false,
00:39:00.229 "psk": "key1",
00:39:00.229 "allow_unrecognized_csi": false,
00:39:00.229 "method": "bdev_nvme_attach_controller",
00:39:00.229 "req_id": 1
00:39:00.229 }
00:39:00.229 Got JSON-RPC error response
00:39:00.229 response:
00:39:00.229 {
00:39:00.229 "code": -5,
00:39:00.229 "message": "Input/output error"
00:39:00.229 }
00:39:00.229 19:28:29 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:39:00.229 19:28:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:00.229 19:28:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:00.229 19:28:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:00.229 19:28:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:39:00.229 19:28:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:00.229 19:28:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:00.229 19:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:00.229 19:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:00.229 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:00.494 19:28:29 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:39:00.494 19:28:29 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:00.494 19:28:29 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:39:00.494 19:28:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:39:00.494 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:00.754 19:28:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:39:00.754 19:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:39:01.015 19:28:30 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:39:01.016 19:28:30 keyring_file -- keyring/file.sh@78 -- # jq length
00:39:01.016 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:01.016 19:28:30 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:39:01.016 19:28:30 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.w3YSw0egSg
00:39:01.016 19:28:30 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:01.016 19:28:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.016 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.277 [2024-11-05 19:28:30.421885] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.w3YSw0egSg': 0100660
00:39:01.277 [2024-11-05 19:28:30.421909] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:39:01.277 request:
00:39:01.277 {
00:39:01.277 "name": "key0",
00:39:01.277 "path": "/tmp/tmp.w3YSw0egSg",
00:39:01.277 "method": "keyring_file_add_key",
00:39:01.277 "req_id": 1
00:39:01.277 }
00:39:01.277 Got JSON-RPC error response
00:39:01.277 response:
00:39:01.277 {
00:39:01.277 "code": -1,
00:39:01.277 "message": "Operation not permitted"
00:39:01.277 }
00:39:01.277 19:28:30 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:39:01.277 19:28:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:01.277 19:28:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:01.277 19:28:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:01.277 19:28:30 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.w3YSw0egSg
00:39:01.277 19:28:30 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.w3YSw0egSg
00:39:01.277 19:28:30 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.w3YSw0egSg
00:39:01.277 19:28:30 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:01.277 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:01.537 19:28:30 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:39:01.537 19:28:30 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:01.537 19:28:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:01.537 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:01.797 [2024-11-05 19:28:30.927171] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.w3YSw0egSg': No such file or directory
00:39:01.797 [2024-11-05 19:28:30.927186] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:39:01.797 [2024-11-05 19:28:30.927200] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:39:01.797 [2024-11-05 19:28:30.927206] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:39:01.797 [2024-11-05 19:28:30.927212] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:39:01.797 [2024-11-05 19:28:30.927216] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:39:01.797 request:
00:39:01.797 {
00:39:01.797 "name": "nvme0",
00:39:01.797 "trtype": "tcp",
00:39:01.797 "traddr": "127.0.0.1",
00:39:01.797 "adrfam": "ipv4",
00:39:01.797 "trsvcid": "4420",
00:39:01.797 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:01.797 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:01.797 "prchk_reftag": false,
00:39:01.797 "prchk_guard": false,
00:39:01.797 "hdgst": false,
00:39:01.797 "ddgst": false,
00:39:01.797 "psk": "key0",
00:39:01.797 "allow_unrecognized_csi": false,
00:39:01.797 "method": "bdev_nvme_attach_controller",
00:39:01.797 "req_id": 1
00:39:01.797 }
00:39:01.797 Got JSON-RPC error response
00:39:01.797 response:
00:39:01.797 {
00:39:01.797 "code": -19,
00:39:01.797 "message": "No such device"
00:39:01.797 }
00:39:01.797 19:28:30 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:39:01.797 19:28:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:01.797 19:28:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:01.797 19:28:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:01.797 19:28:30 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:39:01.797 19:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:01.797 19:28:31 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@17 -- # name=key0
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@17 -- # digest=0
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@18 -- # mktemp
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.94PBsiw5bb
00:39:01.797 19:28:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@506 -- # digest=0
00:39:01.797 19:28:31 keyring_file -- nvmf/common.sh@507 -- # python -
00:39:02.058 19:28:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.94PBsiw5bb
00:39:02.058 19:28:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.94PBsiw5bb
00:39:02.058 19:28:31 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.94PBsiw5bb
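The prep_key/format_interchange_psk sequence that just ran turns the raw key string into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<digest>:<base64 payload>:, via the inline python step at nvmf/common.sh@507. A hedged reconstruction of that step follows; the payload layout, key bytes with an appended little-endian CRC32, is an assumption here, since only the function name and its arguments appear in the trace:

    format_key() {
        local prefix=$1 key=$2 digest=$3
        python3 - "$prefix" "$key" "$digest" << 'EOF'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian byte order
    print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
    EOF
    }

Run with the arguments traced above (digest 0 meaning no HMAC), this produces the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: string that the keyring_linux test later loads with keyctl, and it is what prep_key writes to /tmp/tmp.94PBsiw5bb before the chmod 0600 above.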
00:39:02.058 19:28:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.94PBsiw5bb
00:39:02.058 19:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.94PBsiw5bb
00:39:02.058 19:28:31 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:02.058 19:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:02.318 nvme0n1
00:39:02.318 19:28:31 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:39:02.318 19:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:02.318 19:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:02.318 19:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:02.318 19:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:02.318 19:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:02.578 19:28:31 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:39:02.578 19:28:31 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:39:02.578 19:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:02.839 19:28:31 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:39:02.839 19:28:31 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:39:02.839 19:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:02.839 19:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:02.839 19:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:02.839 19:28:32 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:39:02.839 19:28:32 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:39:02.839 19:28:32 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:02.839 19:28:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:02.839 19:28:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:02.839 19:28:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:02.839 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:03.099 19:28:32 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:39:03.099 19:28:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:39:03.099 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:39:03.359 19:28:32 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:39:03.359 19:28:32 keyring_file -- keyring/file.sh@105 -- # jq length
00:39:03.359 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:03.359 19:28:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
00:39:03.359 19:28:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.94PBsiw5bb
00:39:03.359 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.94PBsiw5bb
00:39:03.619 19:28:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LdwFIwL98s
00:39:03.619 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LdwFIwL98s
00:39:03.880 19:28:32 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:03.880 19:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:03.880 nvme0n1
00:39:03.880 19:28:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:39:03.880 19:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:39:04.141 19:28:33 keyring_file -- keyring/file.sh@113 -- # config='{
00:39:04.141 "subsystems": [
00:39:04.141 {
00:39:04.141 "subsystem": "keyring",
00:39:04.141 "config": [
00:39:04.141 {
00:39:04.141 "method": "keyring_file_add_key",
00:39:04.141 "params": {
00:39:04.141 "name": "key0",
00:39:04.141 "path": "/tmp/tmp.94PBsiw5bb"
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "keyring_file_add_key",
00:39:04.141 "params": {
00:39:04.141 "name": "key1",
00:39:04.141 "path": "/tmp/tmp.LdwFIwL98s"
00:39:04.141 }
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "iobuf",
00:39:04.141 "config": [
00:39:04.141 {
00:39:04.141 "method": "iobuf_set_options",
00:39:04.141 "params": {
00:39:04.141 "small_pool_count": 8192,
00:39:04.141 "large_pool_count": 1024,
00:39:04.141 "small_bufsize": 8192,
00:39:04.141 "large_bufsize": 135168,
00:39:04.141 "enable_numa": false
00:39:04.141 }
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "sock",
00:39:04.141 "config": [
00:39:04.141 {
00:39:04.141 "method": "sock_set_default_impl",
00:39:04.141 "params": {
00:39:04.141 "impl_name": "posix"
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "sock_impl_set_options",
00:39:04.141 "params": {
00:39:04.141 "impl_name": "ssl",
00:39:04.141 "recv_buf_size": 4096,
00:39:04.141 "send_buf_size": 4096,
00:39:04.141 "enable_recv_pipe": true,
00:39:04.141 "enable_quickack": false,
00:39:04.141 "enable_placement_id": 0,
00:39:04.141 "enable_zerocopy_send_server": true,
00:39:04.141 "enable_zerocopy_send_client": false,
00:39:04.141 "zerocopy_threshold": 0,
00:39:04.141 "tls_version": 0,
00:39:04.141 "enable_ktls": false
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "sock_impl_set_options",
00:39:04.141 "params": {
00:39:04.141 "impl_name": "posix",
00:39:04.141 "recv_buf_size": 2097152,
00:39:04.141 "send_buf_size": 2097152,
00:39:04.141 "enable_recv_pipe": true,
00:39:04.141 "enable_quickack": false,
00:39:04.141 "enable_placement_id": 0,
00:39:04.141 "enable_zerocopy_send_server": true,
00:39:04.141 "enable_zerocopy_send_client": false,
00:39:04.141 "zerocopy_threshold": 0,
00:39:04.141 "tls_version": 0,
00:39:04.141 "enable_ktls": false
00:39:04.141 }
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "vmd",
00:39:04.141 "config": []
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "accel",
00:39:04.141 "config": [
00:39:04.141 {
00:39:04.141 "method": "accel_set_options",
00:39:04.141 "params": {
00:39:04.141 "small_cache_size": 128,
00:39:04.141 "large_cache_size": 16,
00:39:04.141 "task_count": 2048,
00:39:04.141 "sequence_count": 2048,
00:39:04.141 "buf_count": 2048
00:39:04.141 }
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "bdev",
00:39:04.141 "config": [
00:39:04.141 {
00:39:04.141 "method": "bdev_set_options",
00:39:04.141 "params": {
00:39:04.141 "bdev_io_pool_size": 65535,
00:39:04.141 "bdev_io_cache_size": 256,
00:39:04.141 "bdev_auto_examine": true,
00:39:04.141 "iobuf_small_cache_size": 128,
00:39:04.141 "iobuf_large_cache_size": 16
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_raid_set_options",
00:39:04.141 "params": {
00:39:04.141 "process_window_size_kb": 1024,
00:39:04.141 "process_max_bandwidth_mb_sec": 0
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_iscsi_set_options",
00:39:04.141 "params": {
00:39:04.141 "timeout_sec": 30
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_nvme_set_options",
00:39:04.141 "params": {
00:39:04.141 "action_on_timeout": "none",
00:39:04.141 "timeout_us": 0,
00:39:04.141 "timeout_admin_us": 0,
00:39:04.141 "keep_alive_timeout_ms": 10000,
00:39:04.141 "arbitration_burst": 0,
00:39:04.141 "low_priority_weight": 0,
00:39:04.141 "medium_priority_weight": 0,
00:39:04.141 "high_priority_weight": 0,
00:39:04.141 "nvme_adminq_poll_period_us": 10000,
00:39:04.141 "nvme_ioq_poll_period_us": 0,
00:39:04.141 "io_queue_requests": 512,
00:39:04.141 "delay_cmd_submit": true,
00:39:04.141 "transport_retry_count": 4,
00:39:04.141 "bdev_retry_count": 3,
00:39:04.141 "transport_ack_timeout": 0,
00:39:04.141 "ctrlr_loss_timeout_sec": 0,
00:39:04.141 "reconnect_delay_sec": 0,
00:39:04.141 "fast_io_fail_timeout_sec": 0,
00:39:04.141 "disable_auto_failback": false,
00:39:04.141 "generate_uuids": false,
00:39:04.141 "transport_tos": 0,
00:39:04.141 "nvme_error_stat": false,
00:39:04.141 "rdma_srq_size": 0,
00:39:04.141 "io_path_stat": false,
00:39:04.141 "allow_accel_sequence": false,
00:39:04.141 "rdma_max_cq_size": 0,
00:39:04.141 "rdma_cm_event_timeout_ms": 0,
00:39:04.141 "dhchap_digests": [
00:39:04.141 "sha256",
00:39:04.141 "sha384",
00:39:04.141 "sha512"
00:39:04.141 ],
00:39:04.141 "dhchap_dhgroups": [
00:39:04.141 "null",
00:39:04.141 "ffdhe2048",
00:39:04.141 "ffdhe3072",
00:39:04.141 "ffdhe4096",
00:39:04.141 "ffdhe6144",
00:39:04.141 "ffdhe8192"
00:39:04.141 ]
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_nvme_attach_controller",
00:39:04.141 "params": {
00:39:04.141 "name": "nvme0",
00:39:04.141 "trtype": "TCP",
00:39:04.141 "adrfam": "IPv4",
00:39:04.141 "traddr": "127.0.0.1",
00:39:04.141 "trsvcid": "4420",
00:39:04.141 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:04.141 "prchk_reftag": false,
00:39:04.141 "prchk_guard": false,
00:39:04.141 "ctrlr_loss_timeout_sec": 0,
00:39:04.141 "reconnect_delay_sec": 0,
00:39:04.141 "fast_io_fail_timeout_sec": 0,
00:39:04.141 "psk": "key0",
00:39:04.141 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:04.141 "hdgst": false,
00:39:04.141 "ddgst": false,
00:39:04.141 "multipath": "multipath"
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_nvme_set_hotplug",
00:39:04.141 "params": {
00:39:04.141 "period_us": 100000,
00:39:04.141 "enable": false
00:39:04.141 }
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "method": "bdev_wait_for_examine"
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 },
00:39:04.141 {
00:39:04.141 "subsystem": "nbd",
00:39:04.141 "config": []
00:39:04.141 }
00:39:04.141 ]
00:39:04.141 }'
00:39:04.141 19:28:33 keyring_file -- keyring/file.sh@115 -- # killprocess 685382
00:39:04.142 19:28:33 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 685382 ']'
00:39:04.142 19:28:33 keyring_file -- common/autotest_common.sh@956 -- # kill -0 685382
00:39:04.142 19:28:33 keyring_file -- common/autotest_common.sh@957 -- # uname
00:39:04.142 19:28:33 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:39:04.142 19:28:33 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 685382
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 685382'
00:39:04.403 killing process with pid 685382
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@971 -- # kill 685382
00:39:04.403 Received shutdown signal, test time was about 1.000000 seconds
00:39:04.403
00:39:04.403 Latency(us)
00:39:04.403 [2024-11-05T18:28:33.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:04.403 [2024-11-05T18:28:33.726Z] ===================================================================================================================
00:39:04.403 [2024-11-05T18:28:33.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@976 -- # wait 685382
00:39:04.403 19:28:33 keyring_file -- keyring/file.sh@118 -- # bperfpid=687185
00:39:04.403 19:28:33 keyring_file -- keyring/file.sh@120 -- # waitforlisten 687185 /var/tmp/bperf.sock
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 687185 ']'
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:04.403 19:28:33 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:04.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:04.403 19:28:33 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:04.403 19:28:33 keyring_file -- keyring/file.sh@116 -- # echo '{
00:39:04.403 "subsystems": [
00:39:04.403 {
00:39:04.403 "subsystem": "keyring",
00:39:04.403 "config": [
00:39:04.403 {
00:39:04.403 "method": "keyring_file_add_key",
00:39:04.403 "params": {
00:39:04.403 "name": "key0",
00:39:04.403 "path": "/tmp/tmp.94PBsiw5bb"
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "keyring_file_add_key",
00:39:04.403 "params": {
00:39:04.403 "name": "key1",
00:39:04.403 "path": "/tmp/tmp.LdwFIwL98s"
00:39:04.403 }
00:39:04.403 }
00:39:04.403 ]
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "subsystem": "iobuf",
00:39:04.403 "config": [
00:39:04.403 {
00:39:04.403 "method": "iobuf_set_options",
00:39:04.403 "params": {
00:39:04.403 "small_pool_count": 8192,
00:39:04.403 "large_pool_count": 1024,
00:39:04.403 "small_bufsize": 8192,
00:39:04.403 "large_bufsize": 135168,
00:39:04.403 "enable_numa": false
00:39:04.403 }
00:39:04.403 }
00:39:04.403 ]
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "subsystem": "sock",
00:39:04.403 "config": [
00:39:04.403 {
00:39:04.403 "method": "sock_set_default_impl",
00:39:04.403 "params": {
00:39:04.403 "impl_name": "posix"
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "sock_impl_set_options",
00:39:04.403 "params": {
00:39:04.403 "impl_name": "ssl",
00:39:04.403 "recv_buf_size": 4096,
00:39:04.403 "send_buf_size": 4096,
00:39:04.403 "enable_recv_pipe": true,
00:39:04.403 "enable_quickack": false,
00:39:04.403 "enable_placement_id": 0,
00:39:04.403 "enable_zerocopy_send_server": true,
00:39:04.403 "enable_zerocopy_send_client": false,
00:39:04.403 "zerocopy_threshold": 0,
00:39:04.403 "tls_version": 0,
00:39:04.403 "enable_ktls": false
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "sock_impl_set_options",
00:39:04.403 "params": {
00:39:04.403 "impl_name": "posix",
00:39:04.403 "recv_buf_size": 2097152,
00:39:04.403 "send_buf_size": 2097152,
00:39:04.403 "enable_recv_pipe": true,
00:39:04.403 "enable_quickack": false,
00:39:04.403 "enable_placement_id": 0,
00:39:04.403 "enable_zerocopy_send_server": true,
00:39:04.403 "enable_zerocopy_send_client": false,
00:39:04.403 "zerocopy_threshold": 0,
00:39:04.403 "tls_version": 0,
00:39:04.403 "enable_ktls": false
00:39:04.403 }
00:39:04.403 }
00:39:04.403 ]
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "subsystem": "vmd",
00:39:04.403 "config": []
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "subsystem": "accel",
00:39:04.403 "config": [
00:39:04.403 {
00:39:04.403 "method": "accel_set_options",
00:39:04.403 "params": {
00:39:04.403 "small_cache_size": 128,
00:39:04.403 "large_cache_size": 16,
00:39:04.403 "task_count": 2048,
00:39:04.403 "sequence_count": 2048,
00:39:04.403 "buf_count": 2048
00:39:04.403 }
00:39:04.403 }
00:39:04.403 ]
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "subsystem": "bdev",
00:39:04.403 "config": [
00:39:04.403 {
00:39:04.403 "method": "bdev_set_options",
00:39:04.403 "params": {
00:39:04.403 "bdev_io_pool_size": 65535,
00:39:04.403 "bdev_io_cache_size": 256,
00:39:04.403 "bdev_auto_examine": true,
00:39:04.403 "iobuf_small_cache_size": 128,
00:39:04.403 "iobuf_large_cache_size": 16
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "bdev_raid_set_options",
00:39:04.403 "params": {
00:39:04.403 "process_window_size_kb": 1024,
00:39:04.403 "process_max_bandwidth_mb_sec": 0
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "bdev_iscsi_set_options",
00:39:04.403 "params": {
00:39:04.403 "timeout_sec": 30
00:39:04.403 }
00:39:04.403 },
00:39:04.403 {
00:39:04.403 "method": "bdev_nvme_set_options",
00:39:04.403 "params": {
00:39:04.403 "action_on_timeout": "none",
00:39:04.403 "timeout_us": 0,
00:39:04.403 "timeout_admin_us": 0,
00:39:04.403 "keep_alive_timeout_ms": 10000,
00:39:04.403 "arbitration_burst": 0,
00:39:04.403 "low_priority_weight": 0,
00:39:04.403 "medium_priority_weight": 0,
00:39:04.403 "high_priority_weight": 0,
00:39:04.403 "nvme_adminq_poll_period_us": 10000,
00:39:04.403 "nvme_ioq_poll_period_us": 0,
00:39:04.403 "io_queue_requests": 512,
00:39:04.403 "delay_cmd_submit": true,
00:39:04.403 "transport_retry_count": 4,
00:39:04.403 "bdev_retry_count": 3,
00:39:04.404 "transport_ack_timeout": 0,
00:39:04.404 "ctrlr_loss_timeout_sec": 0,
00:39:04.404 "reconnect_delay_sec": 0,
00:39:04.404 "fast_io_fail_timeout_sec": 0,
00:39:04.404 "disable_auto_failback": false,
00:39:04.404 "generate_uuids": false,
00:39:04.404 "transport_tos": 0,
00:39:04.404 "nvme_error_stat": false,
00:39:04.404 "rdma_srq_size": 0,
00:39:04.404 "io_path_stat": false,
00:39:04.404 "allow_accel_sequence": false,
00:39:04.404 "rdma_max_cq_size": 0,
00:39:04.404 "rdma_cm_event_timeout_ms": 0,
00:39:04.404 "dhchap_digests": [
00:39:04.404 "sha256",
00:39:04.404 "sha384",
00:39:04.404 "sha512"
00:39:04.404 ],
00:39:04.404 "dhchap_dhgroups": [
00:39:04.404 "null",
00:39:04.404 "ffdhe2048",
00:39:04.404 "ffdhe3072",
00:39:04.404 "ffdhe4096",
00:39:04.404 "ffdhe6144",
00:39:04.404 "ffdhe8192"
00:39:04.404 ]
00:39:04.404 }
00:39:04.404 },
00:39:04.404 {
00:39:04.404 "method": "bdev_nvme_attach_controller",
00:39:04.404 "params": {
00:39:04.404 "name": "nvme0",
00:39:04.404 "trtype": "TCP",
00:39:04.404 "adrfam": "IPv4",
00:39:04.404 "traddr": "127.0.0.1",
00:39:04.404 "trsvcid": "4420",
00:39:04.404 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:04.404 "prchk_reftag": false,
00:39:04.404 "prchk_guard": false,
00:39:04.404 "ctrlr_loss_timeout_sec": 0,
00:39:04.404 "reconnect_delay_sec": 0,
00:39:04.404 "fast_io_fail_timeout_sec": 0,
00:39:04.404 "psk": "key0",
00:39:04.404 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:04.404 "hdgst": false,
00:39:04.404 "ddgst": false,
00:39:04.404 "multipath": "multipath"
00:39:04.404 }
00:39:04.404 },
00:39:04.404 {
00:39:04.404 "method": "bdev_nvme_set_hotplug",
00:39:04.404 "params": {
00:39:04.404 "period_us": 100000,
00:39:04.404 "enable": false
00:39:04.404 }
00:39:04.404 },
00:39:04.404 {
00:39:04.404 "method": "bdev_wait_for_examine"
00:39:04.404 }
00:39:04.404 ]
00:39:04.404 },
00:39:04.404 {
00:39:04.404 "subsystem": "nbd",
00:39:04.404 "config": []
00:39:04.404 }
00:39:04.404 ]
00:39:04.404 }'
00:39:04.404 [2024-11-05 19:28:33.651936] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:39:04.404 [2024-11-05 19:28:33.651998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687185 ]
00:39:04.665 [2024-11-05 19:28:33.734118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:04.665 [2024-11-05 19:28:33.763679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:04.665 [2024-11-05 19:28:33.906509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:39:05.236 19:28:34 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:05.236 19:28:34 keyring_file -- common/autotest_common.sh@866 -- # return 0
00:39:05.236 19:28:34 keyring_file -- keyring/file.sh@121 -- # jq length
00:39:05.236 19:28:34 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys
00:39:05.236 19:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:05.497 19:28:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:39:05.497 19:28:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:05.497 19:28:34 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 ))
00:39:05.497 19:28:34 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:05.497 19:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:05.759 19:28:34 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 ))
00:39:05.759 19:28:34 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers
00:39:05.759 19:28:34 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name'
00:39:05.759 19:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:39:06.021 19:28:35 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]]
keyring/file.sh@19 -- # rm -f /tmp/tmp.94PBsiw5bb /tmp/tmp.LdwFIwL98s 00:39:06.021 19:28:35 keyring_file -- keyring/file.sh@20 -- # killprocess 687185 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 687185 ']' 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@956 -- # kill -0 687185 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@957 -- # uname 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687185 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687185' 00:39:06.021 killing process with pid 687185 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@971 -- # kill 687185 00:39:06.021 Received shutdown signal, test time was about 1.000000 seconds 00:39:06.021 00:39:06.021 Latency(us) 00:39:06.021 [2024-11-05T18:28:35.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.021 [2024-11-05T18:28:35.344Z] =================================================================================================================== 00:39:06.021 [2024-11-05T18:28:35.344Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@976 -- # wait 687185 00:39:06.021 19:28:35 keyring_file -- keyring/file.sh@21 -- # killprocess 685249 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 685249 ']' 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@956 -- # kill -0 685249 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@957 -- # uname 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:06.021 19:28:35 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 685249 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 685249' 00:39:06.281 killing process with pid 685249 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@971 -- # kill 685249 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@976 -- # wait 685249 00:39:06.281 00:39:06.281 real 0m11.728s 00:39:06.281 user 0m28.157s 00:39:06.281 sys 0m2.596s 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:06.281 19:28:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:06.281 ************************************ 00:39:06.281 END TEST keyring_file 00:39:06.281 ************************************ 00:39:06.543 19:28:35 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:39:06.543 19:28:35 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:06.543 19:28:35 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:06.543 19:28:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:06.543 19:28:35 -- 
common/autotest_common.sh@10 -- # set +x 00:39:06.543 ************************************ 00:39:06.543 START TEST keyring_linux 00:39:06.543 ************************************ 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:06.543 Joined session keyring: 377534035 00:39:06.543 * Looking for test storage... 00:39:06.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.543 19:28:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:06.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.543 --rc genhtml_branch_coverage=1 00:39:06.543 --rc genhtml_function_coverage=1 00:39:06.543 --rc genhtml_legend=1 00:39:06.543 --rc geninfo_all_blocks=1 00:39:06.543 --rc geninfo_unexecuted_blocks=1 00:39:06.543 00:39:06.543 ' 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:06.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.543 --rc genhtml_branch_coverage=1 00:39:06.543 --rc genhtml_function_coverage=1 00:39:06.543 --rc genhtml_legend=1 00:39:06.543 --rc geninfo_all_blocks=1 00:39:06.543 --rc geninfo_unexecuted_blocks=1 00:39:06.543 00:39:06.543 ' 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:06.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.543 --rc genhtml_branch_coverage=1 00:39:06.543 --rc genhtml_function_coverage=1 00:39:06.543 --rc genhtml_legend=1 00:39:06.543 --rc geninfo_all_blocks=1 00:39:06.543 --rc geninfo_unexecuted_blocks=1 00:39:06.543 00:39:06.543 ' 00:39:06.543 19:28:35 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:06.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.543 --rc genhtml_branch_coverage=1 00:39:06.543 --rc genhtml_function_coverage=1 00:39:06.543 --rc genhtml_legend=1 00:39:06.543 --rc geninfo_all_blocks=1 00:39:06.543 --rc geninfo_unexecuted_blocks=1 00:39:06.543 00:39:06.543 ' 00:39:06.543 19:28:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:06.543 19:28:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:06.805 19:28:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:39:06.805 19:28:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:06.805 19:28:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:06.805 19:28:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:06.805 19:28:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:06.805 19:28:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:06.805 19:28:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:06.805 19:28:35 keyring_linux -- paths/export.sh@5 -- # export PATH
00:39:06.805 19:28:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:39:06.805 19:28:35 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:39:06.805 19:28:35 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:39:06.805 19:28:35 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@50 -- # : 0
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # digest=0
00:39:06.805 19:28:35 keyring_linux -- nvmf/common.sh@507 -- # python -
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:39:06.805 /tmp/:spdk-test:key0
00:39:06.805 19:28:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:39:06.805 19:28:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:39:06.806 19:28:35 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:39:06.806 19:28:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:39:06.806 19:28:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@506 -- # digest=0
00:39:06.806 19:28:35 keyring_linux -- nvmf/common.sh@507 -- # python -
00:39:06.806 19:28:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:39:06.806 19:28:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:39:06.806 /tmp/:spdk-test:key1
00:39:06.806 19:28:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=687624
00:39:06.806 19:28:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 687624
00:39:06.806 19:28:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 687624 ']'
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:06.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:06.806 19:28:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:06.806 [2024-11-05 19:28:36.060742] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
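For readers following the prep_key/format_key trace above: each raw hex key is wrapped into the NVMe TLS PSK interchange form before being stored. Below is a plausible reconstruction of what the `python -` step computes, assuming the interchange payload is base64 over the key bytes followed by their little-endian CRC32 (the `NVMeTLSkey-1:00:...` values printed later in this run are consistent with that). This is an illustrative sketch, not SPDK's exact implementation:

# Sketch only: assumed reconstruction of the format_key "python -" step above.
key=00112233445566778899aabbccddeeff
digest=0
python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # key treated as an ASCII string
digest = int(sys.argv[2])                   # 0 selects the "00" (no hash) variant
crc = struct.pack("<I", zlib.crc32(key))    # 4-byte little-endian checksum
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
EOF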
00:39:06.806 [2024-11-05 19:28:36.060836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687624 ]
00:39:07.067 [2024-11-05 19:28:36.135692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:07.067 [2024-11-05 19:28:36.177437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:07.639 [2024-11-05 19:28:36.852157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:07.639 null0
00:39:07.639 [2024-11-05 19:28:36.884207] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:39:07.639 [2024-11-05 19:28:36.884596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:39:07.639 945095060
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:39:07.639 290507209
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=687919
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 687919 /var/tmp/bperf.sock
00:39:07.639 19:28:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 687919 ']'
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:07.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:07.639 19:28:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:07.639 [2024-11-05 19:28:36.961468] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
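The two keyctl add calls above store each formatted PSK as a `user`-type key in the session keyring (`@s`); keyctl prints the serial number of the new key (945095060 and 290507209 in this run), which is what later lookups resolve by name. A minimal keyutils round trip, reusing the key0 payload from this run:

# Sketch: store a PSK in the session keyring and read it back.
sn=$(keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)
echo "$sn"           # keyctl add prints the new key's serial number
keyctl print "$sn"   # dumps the stored PSK payload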
00:39:07.639 [2024-11-05 19:28:36.961517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687919 ]
00:39:07.900 [2024-11-05 19:28:37.044859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:07.900 [2024-11-05 19:28:37.074482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:08.472 19:28:37 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:08.472 19:28:37 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:39:08.472 19:28:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:39:08.472 19:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:39:08.732 19:28:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:39:08.732 19:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:39:08.993 19:28:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:39:08.993 19:28:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:39:08.993 [2024-11-05 19:28:38.270497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:39:09.253 nvme0n1
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:39:09.253 19:28:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:39:09.253 19:28:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:39:09.253 19:28:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:09.253 19:28:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:39:09.253 19:28:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@25 -- # sn=945095060
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
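Every bperf_cmd expansion above follows the same thin-wrapper pattern: rpc.py is pointed at the bdevperf instance's private RPC socket (the -r /var/tmp/bperf.sock passed at launch) rather than the default /var/tmp/spdk.sock of the target. A sketch of the wrapper, matching the keyring/common.sh@8 expansions in the trace:

# Sketch of the bperf_cmd wrapper traced above.
bperf_cmd() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}
bperf_cmd keyring_get_keys | jq length   # e.g. count the registered keys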
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 945095060 == \9\4\5\0\9\5\0\6\0 ]]
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 945095060
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:39:09.514 19:28:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:09.514 Running I/O for 1 seconds...
00:39:10.905 16521.00 IOPS, 64.54 MiB/s
00:39:10.905
00:39:10.905 Latency(us)
00:39:10.905 [2024-11-05T18:28:40.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:10.905 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:39:10.905 nvme0n1 : 1.01 16521.93 64.54 0.00 0.00 7714.50 1897.81 8901.97
00:39:10.905 [2024-11-05T18:28:40.228Z] ===================================================================================================================
00:39:10.905 [2024-11-05T18:28:40.228Z] Total : 16521.93 64.54 0.00 0.00 7714.50 1897.81 8901.97
00:39:10.905 {
00:39:10.905 "results": [
00:39:10.905 {
00:39:10.905 "job": "nvme0n1",
00:39:10.905 "core_mask": "0x2",
00:39:10.905 "workload": "randread",
00:39:10.905 "status": "finished",
00:39:10.905 "queue_depth": 128,
00:39:10.905 "io_size": 4096,
00:39:10.905 "runtime": 1.007691,
00:39:10.905 "iops": 16521.929837618874,
00:39:10.905 "mibps": 64.53878842819873,
00:39:10.905 "io_failed": 0,
00:39:10.905 "io_timeout": 0,
00:39:10.905 "avg_latency_us": 7714.503184575649,
00:39:10.905 "min_latency_us": 1897.8133333333333,
00:39:10.905 "max_latency_us": 8901.973333333333
00:39:10.905 }
00:39:10.905 ],
00:39:10.905 "core_count": 1
00:39:10.905 }
00:39:10.905 19:28:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:39:10.905 19:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:39:10.905 19:28:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@23 -- # return
00:39:10.905 19:28:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:10.905 19:28:40 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:39:10.905 19:28:40 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:10.906 19:28:40 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:39:10.906 19:28:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:10.906 19:28:40 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:39:10.906 19:28:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:10.906 19:28:40 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:10.906 19:28:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:11.167 [2024-11-05 19:28:40.355107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:39:11.167 [2024-11-05 19:28:40.355416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f7480 (107): Transport endpoint is not connected
00:39:11.167 [2024-11-05 19:28:40.356411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f7480 (9): Bad file descriptor
00:39:11.167 [2024-11-05 19:28:40.357413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:39:11.167 [2024-11-05 19:28:40.357421] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:39:11.167 [2024-11-05 19:28:40.357427] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:39:11.167 [2024-11-05 19:28:40.357434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
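The errors above are expected: linux.sh@84 runs this second attach under NOT, so the step passes only if attaching with :spdk-test:key1 (which was never registered with the target, configured for key0 only) fails; the JSON-RPC request and error response follow below. A simplified sketch of such an expected-failure wrapper is shown here; SPDK's real helper in autotest_common.sh tracks exit codes more carefully, as the es= lines above suggest:

# Simplified sketch of an expected-failure wrapper like NOT.
NOT() {
    if "$@"; then
        return 1   # unexpected success: fail the step
    fi
    return 0       # the command failed, which is what was wanted
}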
00:39:11.167 request:
00:39:11.167 {
00:39:11.167 "name": "nvme0",
00:39:11.167 "trtype": "tcp",
00:39:11.167 "traddr": "127.0.0.1",
00:39:11.167 "adrfam": "ipv4",
00:39:11.167 "trsvcid": "4420",
00:39:11.167 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:11.167 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:11.167 "prchk_reftag": false,
00:39:11.167 "prchk_guard": false,
00:39:11.167 "hdgst": false,
00:39:11.167 "ddgst": false,
00:39:11.167 "psk": ":spdk-test:key1",
00:39:11.167 "allow_unrecognized_csi": false,
00:39:11.167 "method": "bdev_nvme_attach_controller",
00:39:11.167 "req_id": 1
00:39:11.167 }
00:39:11.167 Got JSON-RPC error response
00:39:11.167 response:
00:39:11.167 {
00:39:11.167 "code": -5,
00:39:11.167 "message": "Input/output error"
00:39:11.167 }
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@33 -- # sn=945095060
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 945095060
00:39:11.167 1 links removed
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@33 -- # sn=290507209
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 290507209
00:39:11.167 1 links removed
00:39:11.167 19:28:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 687919
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 687919 ']'
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 687919
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687919
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687919'
00:39:11.167 killing process with pid 687919
00:39:11.167 19:28:40 keyring_linux -- common/autotest_common.sh@971 -- # kill 687919
00:39:11.167 Received shutdown signal, test time was about 1.000000 seconds
00:39:11.168
00:39:11.168 Latency(us)
00:39:11.168 [2024-11-05T18:28:40.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:11.168 [2024-11-05T18:28:40.491Z] ===================================================================================================================
00:39:11.168 [2024-11-05T18:28:40.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:11.168 19:28:40 keyring_linux -- common/autotest_common.sh@976 -- # wait 687919
00:39:11.429 19:28:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 687624
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 687624 ']'
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 687624
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687624
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687624'
00:39:11.429 killing process with pid 687624
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@971 -- # kill 687624
00:39:11.429 19:28:40 keyring_linux -- common/autotest_common.sh@976 -- # wait 687624
00:39:11.690
00:39:11.690 real 0m5.165s
00:39:11.690 user 0m9.579s
00:39:11.690 sys 0m1.360s
00:39:11.690 19:28:40 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:39:11.690 19:28:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:39:11.690 ************************************
00:39:11.690 END TEST keyring_linux
00:39:11.690 ************************************
00:39:11.690 19:28:40 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:39:11.690 19:28:40 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:39:11.690 19:28:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:39:11.690 19:28:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:39:11.690 19:28:40 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:39:11.690 19:28:40 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:39:11.690 19:28:40 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:39:11.690 19:28:40 -- common/autotest_common.sh@724 -- # xtrace_disable
00:39:11.690 19:28:40 -- common/autotest_common.sh@10 -- # set +x
00:39:11.690 19:28:40 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:39:11.690 19:28:40 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:39:11.690 19:28:40 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:39:11.690 19:28:40 -- common/autotest_common.sh@10 -- # set +x
00:39:19.831 INFO: APP EXITING
00:39:19.831 INFO: killing all VMs
00:39:19.831 INFO: killing vhost app
00:39:19.831 INFO: EXIT DONE
00:39:22.373 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:65:00.0 (144d a80a): Already using the nvme driver
00:39:22.373 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:39:22.373 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:39:22.634 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:39:22.634 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:39:22.634 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:39:22.634 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:39:25.946 Cleaning
00:39:25.946 Removing: /var/run/dpdk/spdk0/config
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:25.946 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:25.946 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:25.946 Removing: /var/run/dpdk/spdk1/config
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:25.946 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:25.946 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:25.946 Removing: /var/run/dpdk/spdk2/config
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:25.946 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:25.947 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:25.947 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:25.947 Removing: /var/run/dpdk/spdk3/config
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:25.947 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:25.947 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:25.947 Removing: /var/run/dpdk/spdk4/config
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:25.947 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:25.947 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:25.947 Removing: /dev/shm/bdev_svc_trace.1
00:39:25.947 Removing: /dev/shm/nvmf_trace.0
00:39:25.947 Removing: /dev/shm/spdk_tgt_trace.pid110588
00:39:25.947 Removing: /var/run/dpdk/spdk0
00:39:25.947 Removing: /var/run/dpdk/spdk1
00:39:25.947 Removing: /var/run/dpdk/spdk2
00:39:25.947 Removing: /var/run/dpdk/spdk3
00:39:25.947 Removing: /var/run/dpdk/spdk4
00:39:25.947 Removing: /var/run/dpdk/spdk_pid108852
00:39:25.947 Removing: /var/run/dpdk/spdk_pid110588
00:39:25.947 Removing: /var/run/dpdk/spdk_pid111144
00:39:25.947 Removing: /var/run/dpdk/spdk_pid112368
00:39:25.947 Removing: /var/run/dpdk/spdk_pid112525
00:39:25.947 Removing: /var/run/dpdk/spdk_pid113839
00:39:25.947 Removing: /var/run/dpdk/spdk_pid113908
00:39:25.947 Removing: /var/run/dpdk/spdk_pid114364
00:39:25.947 Removing: /var/run/dpdk/spdk_pid115496
00:39:25.947 Removing: /var/run/dpdk/spdk_pid115968
00:39:25.947 Removing: /var/run/dpdk/spdk_pid116366
00:39:25.947 Removing: /var/run/dpdk/spdk_pid116761
00:39:25.947 Removing: /var/run/dpdk/spdk_pid117178
00:39:25.947 Removing: /var/run/dpdk/spdk_pid117572
00:39:25.947 Removing: /var/run/dpdk/spdk_pid117926
00:39:25.947 Removing: /var/run/dpdk/spdk_pid118139
00:39:25.947 Removing: /var/run/dpdk/spdk_pid118389
00:39:25.947 Removing: /var/run/dpdk/spdk_pid119435
00:39:25.947 Removing: /var/run/dpdk/spdk_pid122993
00:39:25.947 Removing: /var/run/dpdk/spdk_pid123263
00:39:25.947 Removing: /var/run/dpdk/spdk_pid123623
00:39:25.947 Removing: /var/run/dpdk/spdk_pid123743
00:39:25.947 Removing: /var/run/dpdk/spdk_pid124118
00:39:25.947 Removing: /var/run/dpdk/spdk_pid124448
00:39:25.947 Removing: /var/run/dpdk/spdk_pid124825
00:39:25.947 Removing: /var/run/dpdk/spdk_pid125071
00:39:25.947 Removing: /var/run/dpdk/spdk_pid125371
00:39:25.947 Removing: /var/run/dpdk/spdk_pid125537
00:39:25.947 Removing: /var/run/dpdk/spdk_pid125841
00:39:25.947 Removing: /var/run/dpdk/spdk_pid125914
00:39:25.947 Removing: /var/run/dpdk/spdk_pid126445
00:39:25.947 Removing: /var/run/dpdk/spdk_pid126713
00:39:25.947 Removing: /var/run/dpdk/spdk_pid127116
00:39:25.947 Removing: /var/run/dpdk/spdk_pid131700
00:39:25.947 Removing: /var/run/dpdk/spdk_pid137067
00:39:25.947 Removing: /var/run/dpdk/spdk_pid149206
00:39:25.947 Removing: /var/run/dpdk/spdk_pid149995
00:39:25.947 Removing: /var/run/dpdk/spdk_pid155877
00:39:25.947 Removing: /var/run/dpdk/spdk_pid156231
00:39:25.947 Removing: /var/run/dpdk/spdk_pid161473
00:39:25.947 Removing: /var/run/dpdk/spdk_pid168563
00:39:25.947 Removing: /var/run/dpdk/spdk_pid171840
00:39:25.947 Removing: /var/run/dpdk/spdk_pid184286
00:39:25.947 Removing: /var/run/dpdk/spdk_pid206149
00:39:25.947 Removing: /var/run/dpdk/spdk_pid211345
00:39:25.947 Removing: /var/run/dpdk/spdk_pid213458
00:39:25.947 Removing: /var/run/dpdk/spdk_pid214476
00:39:25.947 Removing: /var/run/dpdk/spdk_pid220608
00:39:25.947 Removing: /var/run/dpdk/spdk_pid277299
00:39:25.947 Removing: /var/run/dpdk/spdk_pid283725
00:39:25.947 Removing: /var/run/dpdk/spdk_pid290896
00:39:25.947 Removing: /var/run/dpdk/spdk_pid298524
00:39:25.947 Removing: /var/run/dpdk/spdk_pid298623
00:39:25.947 Removing: /var/run/dpdk/spdk_pid299657
00:39:25.947 Removing: /var/run/dpdk/spdk_pid300682
00:39:25.947 Removing: /var/run/dpdk/spdk_pid301750
00:39:25.947 Removing: /var/run/dpdk/spdk_pid302345
00:39:25.947 Removing: /var/run/dpdk/spdk_pid302495
00:39:25.947 Removing: /var/run/dpdk/spdk_pid302732
00:39:25.947 Removing: /var/run/dpdk/spdk_pid302842
00:39:25.947 Removing: /var/run/dpdk/spdk_pid302845
00:39:25.947 Removing: /var/run/dpdk/spdk_pid303857
00:39:25.947 Removing: /var/run/dpdk/spdk_pid304865
00:39:25.947 Removing: /var/run/dpdk/spdk_pid305869
00:39:25.947 Removing: /var/run/dpdk/spdk_pid306545
00:39:25.947 Removing: /var/run/dpdk/spdk_pid306593
00:39:25.947 Removing: /var/run/dpdk/spdk_pid306883
00:39:25.947 Removing: /var/run/dpdk/spdk_pid308307
00:39:25.947 Removing: /var/run/dpdk/spdk_pid309480
00:39:25.947 Removing: /var/run/dpdk/spdk_pid320117
00:39:25.947 Removing: /var/run/dpdk/spdk_pid356403
00:39:25.947 Removing: /var/run/dpdk/spdk_pid361914
00:39:25.947 Removing: /var/run/dpdk/spdk_pid363782
00:39:25.947 Removing: /var/run/dpdk/spdk_pid365937
00:39:25.947 Removing: /var/run/dpdk/spdk_pid365972
00:39:25.947 Removing: /var/run/dpdk/spdk_pid366286
00:39:25.947 Removing: /var/run/dpdk/spdk_pid366306
00:39:25.947 Removing: /var/run/dpdk/spdk_pid367018
00:39:25.947 Removing: /var/run/dpdk/spdk_pid369032
00:39:25.947 Removing: /var/run/dpdk/spdk_pid370106
00:39:25.947 Removing: /var/run/dpdk/spdk_pid370485
00:39:25.947 Removing: /var/run/dpdk/spdk_pid373189
00:39:25.947 Removing: /var/run/dpdk/spdk_pid373891
00:39:25.947 Removing: /var/run/dpdk/spdk_pid374609
00:39:25.947 Removing: /var/run/dpdk/spdk_pid379685
00:39:25.947 Removing: /var/run/dpdk/spdk_pid386078
00:39:25.947 Removing: /var/run/dpdk/spdk_pid386079
00:39:25.947 Removing: /var/run/dpdk/spdk_pid386080
00:39:25.947 Removing: /var/run/dpdk/spdk_pid390799
00:39:25.947 Removing: /var/run/dpdk/spdk_pid401165
00:39:25.947 Removing: /var/run/dpdk/spdk_pid406460
00:39:25.947 Removing: /var/run/dpdk/spdk_pid413695
00:39:25.947 Removing: /var/run/dpdk/spdk_pid415226
00:39:25.947 Removing: /var/run/dpdk/spdk_pid417083
00:39:25.947 Removing: /var/run/dpdk/spdk_pid418725
00:39:25.947 Removing: /var/run/dpdk/spdk_pid424361
00:39:25.947 Removing: /var/run/dpdk/spdk_pid429635
00:39:26.207 Removing: /var/run/dpdk/spdk_pid438712
00:39:26.207 Removing: /var/run/dpdk/spdk_pid438833
00:39:26.207 Removing: /var/run/dpdk/spdk_pid443965
00:39:26.207 Removing: /var/run/dpdk/spdk_pid444057
00:39:26.207 Removing: /var/run/dpdk/spdk_pid444319
00:39:26.207 Removing: /var/run/dpdk/spdk_pid444915
00:39:26.208 Removing: /var/run/dpdk/spdk_pid444987
00:39:26.208 Removing: /var/run/dpdk/spdk_pid450419
00:39:26.208 Removing: /var/run/dpdk/spdk_pid451223
00:39:26.208 Removing: /var/run/dpdk/spdk_pid456848
00:39:26.208 Removing: /var/run/dpdk/spdk_pid460362
00:39:26.208 Removing: /var/run/dpdk/spdk_pid467267
00:39:26.208 Removing: /var/run/dpdk/spdk_pid477339
00:39:26.208 Removing: /var/run/dpdk/spdk_pid486082
00:39:26.208 Removing: /var/run/dpdk/spdk_pid486127
00:39:26.208 Removing: /var/run/dpdk/spdk_pid509850
00:39:26.208 Removing: /var/run/dpdk/spdk_pid510399
00:39:26.208 Removing: /var/run/dpdk/spdk_pid517476
00:39:26.208 Removing: /var/run/dpdk/spdk_pid517847
00:39:26.208 Removing: /var/run/dpdk/spdk_pid524265
00:39:26.208 Removing: /var/run/dpdk/spdk_pid524957
00:39:26.208 Removing: /var/run/dpdk/spdk_pid525639
00:39:26.208 Removing: /var/run/dpdk/spdk_pid526425
00:39:26.208 Removing: /var/run/dpdk/spdk_pid527405
00:39:26.208 Removing: /var/run/dpdk/spdk_pid528217
00:39:26.208 Removing: /var/run/dpdk/spdk_pid529072
00:39:26.208 Removing: /var/run/dpdk/spdk_pid529761
00:39:26.208 Removing: /var/run/dpdk/spdk_pid534831
00:39:26.208 Removing: /var/run/dpdk/spdk_pid541305
00:39:26.208 Removing: /var/run/dpdk/spdk_pid548418
00:39:26.208 Removing: /var/run/dpdk/spdk_pid553309
00:39:26.208 Removing: /var/run/dpdk/spdk_pid558462
00:39:26.208 Removing: /var/run/dpdk/spdk_pid570739
00:39:26.208 Removing: /var/run/dpdk/spdk_pid571484
00:39:26.208 Removing: /var/run/dpdk/spdk_pid576710
00:39:26.208 Removing: /var/run/dpdk/spdk_pid577060
00:39:26.208 Removing: /var/run/dpdk/spdk_pid582083
00:39:26.208 Removing: /var/run/dpdk/spdk_pid588858
00:39:26.208 Removing: /var/run/dpdk/spdk_pid591777
00:39:26.208 Removing: /var/run/dpdk/spdk_pid603827
00:39:26.208 Removing: /var/run/dpdk/spdk_pid625047
00:39:26.208 Removing: /var/run/dpdk/spdk_pid629750
00:39:26.208 Removing: /var/run/dpdk/spdk_pid631748
00:39:26.208 Removing: /var/run/dpdk/spdk_pid632760
00:39:26.208 Removing: /var/run/dpdk/spdk_pid638270
00:39:26.208 Removing: /var/run/dpdk/spdk_pid641664
00:39:26.208 Removing: /var/run/dpdk/spdk_pid648821
00:39:26.208 Removing: /var/run/dpdk/spdk_pid648901
00:39:26.208 Removing: /var/run/dpdk/spdk_pid655029
00:39:26.208 Removing: /var/run/dpdk/spdk_pid657239
00:39:26.208 Removing: /var/run/dpdk/spdk_pid659682
00:39:26.208 Removing: /var/run/dpdk/spdk_pid660947
00:39:26.208 Removing: /var/run/dpdk/spdk_pid663569
00:39:26.208 Removing: /var/run/dpdk/spdk_pid665344
00:39:26.208 Removing: /var/run/dpdk/spdk_pid675476
00:39:26.208 Removing: /var/run/dpdk/spdk_pid675981
00:39:26.208 Removing: /var/run/dpdk/spdk_pid676556
00:39:26.208 Removing: /var/run/dpdk/spdk_pid679481
00:39:26.208 Removing: /var/run/dpdk/spdk_pid680152
00:39:26.208 Removing: /var/run/dpdk/spdk_pid680794
00:39:26.208 Removing: /var/run/dpdk/spdk_pid685249
00:39:26.208 Removing: /var/run/dpdk/spdk_pid685382
00:39:26.208 Removing: /var/run/dpdk/spdk_pid687185
00:39:26.208 Removing: /var/run/dpdk/spdk_pid687624
00:39:26.208 Removing: /var/run/dpdk/spdk_pid687919
00:39:26.208 Clean
00:39:26.469 19:28:55 -- common/autotest_common.sh@1451 -- # return 0
00:39:26.469 19:28:55 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:39:26.469 19:28:55 -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:26.469 19:28:55 -- common/autotest_common.sh@10 -- # set +x
00:39:26.469 19:28:55 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:39:26.469 19:28:55 -- common/autotest_common.sh@730 -- # xtrace_disable
00:39:26.469 19:28:55 -- common/autotest_common.sh@10 -- # set +x
00:39:26.469 19:28:55 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:26.469 19:28:55 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:26.469 19:28:55 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:26.469 19:28:55 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:39:26.469 19:28:55 -- spdk/autotest.sh@394 -- # hostname
00:39:26.469 19:28:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:26.730 geninfo: WARNING: invalid characters removed from testname!
00:39:53.302 19:29:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:54.684 19:29:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:56.594 19:29:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:58.503 19:29:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:59.884 19:29:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:01.794 19:29:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:03.176 19:29:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:03.176 19:29:32 -- spdk/autorun.sh@1 -- $ timing_finish
00:40:03.176 19:29:32 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:40:03.176 19:29:32 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:03.176 19:29:32 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:03.176 19:29:32 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:03.436 + [[ -n 24037 ]]
00:40:03.436 + sudo kill 24037
00:40:03.446 [Pipeline] }
00:40:03.460 [Pipeline] // stage
00:40:03.465 [Pipeline] }
00:40:03.479 [Pipeline] // timeout
00:40:03.483 [Pipeline] }
00:40:03.497 [Pipeline] // catchError
00:40:03.502 [Pipeline] }
00:40:03.516 [Pipeline] // wrap
00:40:03.521 [Pipeline] }
00:40:03.533 [Pipeline] // catchError
00:40:03.541 [Pipeline] stage
00:40:03.543 [Pipeline] { (Epilogue)
00:40:03.557 [Pipeline] catchError
00:40:03.558 [Pipeline] {
00:40:03.570 [Pipeline] echo
00:40:03.572 Cleanup processes
00:40:03.577 [Pipeline] sh
00:40:03.865 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:03.865 700642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:03.880 [Pipeline] sh
00:40:04.168 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:04.168 ++ grep -v 'sudo pgrep'
00:40:04.168 ++ awk '{print $1}'
00:40:04.168 + sudo kill -9
00:40:04.168 + true
00:40:04.180 [Pipeline] sh
00:40:04.470 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:16.712 [Pipeline] sh
00:40:17.000 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:17.000 Artifacts sizes are good
00:40:17.015 [Pipeline] archiveArtifacts
00:40:17.023 Archiving artifacts
00:40:17.191 [Pipeline] sh
00:40:17.563 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:17.605 [Pipeline] cleanWs
00:40:17.616 [WS-CLEANUP] Deleting project workspace...
00:40:17.616 [WS-CLEANUP] Deferred wipeout is used...
00:40:17.623 [WS-CLEANUP] done
00:40:17.624 [Pipeline] }
00:40:17.642 [Pipeline] // catchError
00:40:17.654 [Pipeline] sh
00:40:17.943 + logger -p user.info -t JENKINS-CI
00:40:17.953 [Pipeline] }
00:40:17.967 [Pipeline] // stage
00:40:17.972 [Pipeline] }
00:40:17.987 [Pipeline] // node
00:40:17.992 [Pipeline] End of Pipeline
00:40:18.033 Finished: SUCCESS